<p>Machine Learning and Medicine · Sam Finlayson, MD-PhD student at Harvard-MIT · <a href="http://sgfin.github.io/">sgfin.github.io</a> · Thu, 01 Oct 2020</p>
<h1>Induction, Inductive Biases, and Infusing Knowledge into Learned Representations</h1>
<p><sub><sup>Note: This post is a modified excerpt from the introduction to my PhD thesis.</sup></sub></p>
<h4>Outline:</h4>
<div style="padding-top: 10px; font-size:large">
<a href="#Induc">Inductive Generalization and Inductive Biases</a><br />
<a href="#Phil">-Philosophical Foundations for the Problem of Induction</a><br />
<a href="#InduBias">-Inductive Biases in Machine Learning</a><br />
<a href="#LearnRep">Learned Representations of Data and Knowledge</a><br />
<a href="#LearnBack">-Background on Representation Learning</a><br />
<a href="#Knowledge">-Infusing Domain Knowledge into Neural Representations</a><br />
</div>
<h2 id="-inductive-generalization-and-inductive-biases"><a name="Induc"></a> Inductive Generalization and Inductive Biases</h2>
<p>Our goal in building machine learning systems is, with rare exceptions, to create algorithms whose utility extends beyond the dataset in which they are trained. In other words, we desire intelligent systems that are capable of generalizing to future data. The process of leveraging observations to draw inferences about the unobserved is the principle of <em>induction</em><label for="induct_term" class="margin-toggle sidenote-number"></label><input type="checkbox" id="induct_term" class="margin-toggle" checked="" /><span class="sidenote">Terminological note: In a non-technical setting, the term <em>inductive</em> – denoting the inference of general laws from particular instances – is typically contrasted with the adjective <em>deductive</em>, which denotes the inference of particular instances from general laws. This broad definition of induction may be used in machine learning to describe, for example, the model fitting process as the <em>inductive step</em> and the deployment on new data as the <em>deductive step</em>. By the same token, some AI methods such as automated theorem provers are described as deductive. In the setting of current ML research, however, it is much more common for the term ‘inductive’ to refer specifically to methods that are structurally capable of operating on new data points without retraining. In contrast, <em>transductive</em> methods require a fixed or pre-specified dataset, and are used to make internal predictions about missing features or labels. While many ML methods are assumed to be inductive in both senses of the term, this section concerns itself primarily with the broader notion of induction as it relates to learning from observed data. In contrast, Chapters 1 and 2 involve the second use of this term, as I propose new methods that are inductive but whose predecessors were transductive <a class="citation" href="#chami2020machine"><span style="vertical-align: super">1</span></a>.</span>.
<!--. --></p>
<h3 id="-philosophical-foundations-for-the-problem-of-induction"><a name="Phil"></a> Philosophical Foundations for the Problem of Induction</h3>
<p>Even ancient philosophers appreciated the tenuousness of inductive generalization. As early as the second century, the Greek philosopher Sextus Empiricus argued that the very notion of induction was invalid, a conclusion reached independently by the Charvaka school of philosophy in ancient India <a class="citation" href="#empiricus1933outlines"><span style="vertical-align: super">2,3</span></a>. The so-called “problem of induction,” as it is best known today, was formulated by 18th-century philosopher David Hume in his twin works <em>A Treatise of Human Nature</em> and <em>An Enquiry Concerning Human Understanding</em> <a class="citation" href="#humeTreatise"><span style="vertical-align: super">4,5</span></a>. In these works, Hume argues that all inductive inference hinges upon the premise that the future will follow the past. This premise has since become known as his “Principle of Uniformity of Nature” (or simply, the “Uniformity Principle”), the “Resemblance Principle,” or his “Principle of Extrapolation” <a class="citation" href="#garrett2011reason"><span style="vertical-align: super">6</span></a>.</p>
<p>In the <em>Treatise</em> and the <em>Enquiry</em>, Hume examines various arguments – intuitive, demonstrative, sensible, probabilistic – that could be proposed to establish the principle of extrapolation and, having rejected them all, concludes that inductive inference itself is “not determin’d by reason.” Hume thus places induction outside the scope of reason itself, casting it therefore as non-rational if not irrational. In his 1955 work <em>Fact, Fiction, and Forecast</em>, Nelson Goodman extended and reframed Hume’s arguments, proposing “a new riddle of induction”<a class="citation" href="#goodman1955fact"><span style="vertical-align: super">7</span></a>. For Goodman, the key challenge was not the validity of induction per se, but rather the recognition that for any set of observations, there are multiple contradictory generalizations that could be used to explain them.</p>
<p>At least among scientists, the best known formal response to the problem of induction comes from the philosopher of science Karl Popper. In <em>Conjectures and Refutations</em>, Popper argues that science may sidestep the problem of induction by relying instead upon scientific conjecture followed by criticism <a class="citation" href="#popper2014conjectures"><span style="vertical-align: super">8</span></a>. Stated otherwise, according to Popper, the central goal of scientists should be to formulate falsifiable theories which can be provisionally treated as true when they survive repeated attempts to prove them false <label for="popper_stats" class="margin-toggle sidenote-number"></label><input type="checkbox" id="popper_stats" class="margin-toggle" /><span class="sidenote"> Popper’s framing is frequently used to justify the statistical hypothesis testing frameworks proposed by the likes of Neyman, Pearson, and Fisher. However, the compatibility of Popperian falsification and statistical hypothesis testing is a matter of debate <a class="citation" href="#hilborn1997ecological"><span style="vertical-align: super">9,10,11</span></a>. </span>. Popper’s arguments may be helpful as we frame our evaluation of any specific ML system that has already been trained – and thus instantiated, in a sense, as a “conjecture” that can be refuted. However, the training process of ML systems is itself an act of inductive inference and thus relies on a Uniformity Principle in a way that Popper’s conjecture-refutation framework does not address.</p>
<p>This thesis is not a work of philosophy. However, I consider it important to acknowledge that the entire field of machine learning – the branch of AI concerned with constructing computers that learn from experience <a class="citation" href="#mitchell1997machine"><span style="vertical-align: super">12</span></a> – is predicated upon a core premise that has, for centuries, been recognized as unprovable and arguably non-rational. Moreover, even if the inductive framework is accepted as valid, there are an infinite number of contradictory generalizations that are equally consistent with our training data. While these observations may be philosophical in spirit and may appear impractical, they provide a framing for extremely practical questions:</p>
<p>Under which circumstances can we reasonably expect the future to resemble the past, as far as our models are concerned? Given an infinite number of valid generalizations from our data – most of which are presumably useless or even dangerous – what guiding principles do we leverage to choose between them? What are the falsifiable empirical claims that we should be making about our models, and how should we test them? If we are to assume that prospective failure of our systems is the most likely outcome, as Popper would, what reasonable standards can be set to nevertheless trust ML in safety-critical settings such as healthcare?</p>
<p>Each of these questions will be repeatedly considered throughout the course of this thesis.</p>
<h3 id="-inductive-biases-in-machine-learning"><a name="InduBias"></a> Inductive Biases in Machine Learning</h3>
<p>As outlined above, the paradigm of machine learning presupposes the identification – a la Hume – of some set of tasks and environments for which we expect the future to resemble the past. At this point, we are thus forced to determine guiding principles – a la Goodman – that give our models strong <em>a priori</em> preferences for generalizations that we expect to extrapolate well into the future. When such guiding principles are instantiated as design decisions in our models, they are known as <em>inductive biases</em>.</p>
<p>In his 1980 report <em>The Need for Biases in Learning Generalizations</em>, Tom M. Mitchell argues that inductive biases constitute the heart of generalization and indeed a key basis for learning itself:</p>
<div class="epigraph"><blockquote><p>If consistency with the training instances is taken as the sole determiner of appropriate generalizations, then a program can never make the inductive leap necessary to classify instances beyond those it has observed. Only if the program has other sources of information, or biases for choosing one generalization over the other, can it non-arbitrarily classify instances beyond those in the training set....
<br /><br />
The impact of using a biased generalization language is clear: each subset of instances for which there is no expressible generalization is a concept that could be presented to the program, but which the program will be unable to describe and therefore unable to learn. If it is possible to know ahead of time that certain subsets of instances are irrelevant, then it may be useful to leave these out of the generalization language, in order to simplify the learning problem. ...<br /><br />
Although removing all biases from a generalization system may seem to be a desirable goal, in fact the result is nearly useless. An unbiased learning system’s ability to classify new instances is no better than if it simply stored all the training instances and performed a lookup when asked to classify a subsequent instance.</p><footer>Tom M. Mitchell, <cite>The Need for Biases in Learning Generalizations</cite></footer></blockquote></div>
<p>A key challenge of machine learning, therefore, is to design systems whose inductive biases align with the structure of the problem at hand. The effect of such efforts is not merely to endow the model with the capacity to learn key patterns, but also – somewhat paradoxically – to deliberately hamper the capacity of the model to learn other (presumably less useful) patterns, or at least to drive the model away from learning them. In other words, inductive biases stipulate the properties that we believe our model should have in order to generalize to future data; they thus encode our key assumptions about the problem itself.</p>
<p>The machine learning toolkit offers a wide array of methods for instilling inductive biases in learning systems <label for="inductive_biases" class="margin-toggle">⊕</label><input type="checkbox" id="inductive_biases" class="margin-toggle" checked="" /><span class="marginnote"><img src="/assets/thesis_images/inductive_biases.png" /></span>. For example, regularization methods such as L1-/L2-penalties <a class="citation" href="#tibshirani1996regression"><span style="vertical-align: super">13</span></a>, dropout <a class="citation" href="#srivastava2014dropout"><span style="vertical-align: super">14</span></a>, or early stopping <a class="citation" href="#prechelt1998early"><span style="vertical-align: super">15</span></a> are a simple yet powerful means to impose Occam’s razor onto the training process. By the same token, the maximum margin loss of support vector machines <a class="citation" href="#cortes1995support"><span style="vertical-align: super">16</span></a>, or model selection based on cross-validation, can be described as inductive biases <a class="citation" href="#girosi1995regularization"><span style="vertical-align: super">17,18</span></a>. Bayesian methods of almost any form induce inductive biases by placing explicit prior probabilities over model parameters. Machine learning systems that build on symbolic logic, such as inductive logic programming <a class="citation" href="#muggleton1991inductive"><span style="vertical-align: super">19</span></a>, encode established knowledge into very strict inductive biases by forcing algorithms to reason about training examples explicitly in terms of hypotheses derived from pre-specified databases of facts. 
As nicely synthesized by Battaglia et al., the standard layer types of modern neural networks each have distinct invariances that induce corresponding <em>relational inductive biases</em>; for example, convolutional layers have spatial translational invariance and induce a relational inductive bias of locality, whereas recurrent layers have a temporal invariance that induces the inductive bias of sequentiality <a class="citation" href="#battaglia2018relational"><span style="vertical-align: super">20</span></a>. Such relational inductive biases are extremely powerful when well-matched to the data on which they are applied.</p>
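<p>As a minimal illustration of a regularization-based inductive bias, the following sketch (synthetic data, pure NumPy) compares ordinary least squares with a ridge fit, whose L2 penalty biases the learner toward small weights:</p>

```python
import numpy as np

# Toy illustration (synthetic data, not from the thesis): ridge regression
# imposes an inductive bias toward small weights via an L2 penalty.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
true_w = np.zeros(10)
true_w[:2] = [3.0, -2.0]                 # only two features actually matter
y = X @ true_w + 0.1 * rng.normal(size=50)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: argmin ||Xw - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_unreg = ridge_fit(X, y, lam=0.0)       # no preference among solutions
w_ridge = ridge_fit(X, y, lam=10.0)      # Occam-style shrinkage

# The penalty shrinks the overall weight norm.
assert np.linalg.norm(w_ridge) < np.linalg.norm(w_unreg)
```

The same template, with the penalty term swapped, covers L1 (sparsity-inducing) regularization, though L1 lacks a closed form.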
<p>In the next section, I will introduce the neural representation learning framework – the dominant paradigm of machine learning today – and discuss inductive biases in this setting, with a special emphasis on recent tools for infusing external knowledge into the inductive biases of our models.</p>
<h2 id="learned-representations-of-data-and-knowledge"><a name="LearnRep"></a>Learned Representations of Data and Knowledge</h2>
<p>The performance of most information processing systems, including machine learning systems, typically depends heavily upon the data representations (or features) they employ. Historically, this meant the devotion of significant labor and expertise to <em>feature engineering</em>, the design of data transformations and preprocessing techniques to extract and organize discriminative features from data prior to the application of ML. <em>Representation learning</em><a class="citation" href="#bengio2013representation"><span style="vertical-align: super">21,22</span></a> is an alternative to feature engineering, and refers to learning representations of data (or of knowledge graphs <a class="citation" href="#bordes2013translating"><span style="vertical-align: super">23</span></a>) that are optimized for utility in downstream tasks such as prediction or information retrieval.</p>
<h3 id="--background-on-representation-learning"><a name="LearnBack"></a> Background on Representation Learning</h3>
<p>Many canonical methods in statistical learning can be considered representation learning methods. For example, low-dimensional data representations with desirable properties are learned by unsupervised methods such as principal components analysis <a class="citation" href="#pearson1901liii"><span style="vertical-align: super">24</span></a>, k-means clustering <a class="citation" href="#forgy1965cluster"><span style="vertical-align: super">25</span></a>, independent components analysis <a class="citation" href="#jutten1991blind"><span style="vertical-align: super">26</span></a>, and manifold learning methods such as Isomap <a class="citation" href="#tenenbaum2000global"><span style="vertical-align: super">27</span></a> and locally-linear embeddings <a class="citation" href="#roweis2000nonlinear"><span style="vertical-align: super">28</span></a>. Within the field of machine learning, the most popular paradigm for representation learning is the neural network<a class="citation" href="#bengio2013representation"><span style="vertical-align: super">21,22</span></a>, which provides an extremely flexible framework that can in theory be used to approximate any continuous function <a class="citation" href="#cybenko1989approximation"><span style="vertical-align: super">29</span></a>. Over the past two decades, representation learning with neural networks has steadily outperformed traditional feature engineering methods on a large family of tasks, including speech recognition <a class="citation" href="#dahl2010phone"><span style="vertical-align: super">30</span></a>, image processing <a class="citation" href="#hinton2006fast"><span style="vertical-align: super">31</span></a>, and natural language processing <a class="citation" href="#mikolov2011empirical"><span style="vertical-align: super">32</span></a>.</p>
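<p>PCA makes the idea concrete: on synthetic data with planted two-dimensional structure, a low-dimensional learned representation can be recovered with a single SVD. This toy sketch is illustrative only:</p>

```python
import numpy as np

# Synthetic data: a true 2-D latent signal observed in 20 dimensions.
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 2))                       # hidden structure
mixing = rng.normal(size=(2, 20))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 20))  # observed data

Xc = X - X.mean(axis=0)                  # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                        # learned 2-D representation

# The top two components capture nearly all the variance.
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
assert explained > 0.99
```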
<p>A common feature of all the representation learning methods just mentioned is that they are designed to learn data representations that have lower dimensionality than the original data. This basic inductive bias is motivated by the so-called <em>manifold hypothesis</em>, which states that most real-world data – images, text, genomes, etc. – are captured and stored in high dimensions but actually lie on or near a lower-dimensional manifold embedded in that high-dimensional space.</p>
<p>Another desirable property of learned representations is that they be <em>distributed representations</em><a class="citation" href="#bengio2013representation"><span style="vertical-align: super">21,22</span></a>, composed of multiple elements that can be set separately from each other. Distributed representations are highly expressive: \(n\) learned features with \(k\) values can represent \(k^n\) different concepts, with each feature element representing a degree of meaning along its own axis. This results in a rich similarity space that improves the generalizability of resultant models. The benefits of distributed representations apply to any data type, but are particularly obvious from a conceptual level when considering settings such as natural language processing <a class="citation" href="#mikolov2013distributed"><span style="vertical-align: super">33</span></a>, where the initial data are encoded as symbols that lack any relationship with their underlying meaning. For example, the two sentences (or their equivalent triples, in a knowledge graph setting) ‘ibuprofen impairs renal function’ and ‘Advil damages the kidneys’ have zero tokens or n-grams in common. Thus, machine learning programs based only on symbols would be unable to extrapolate from one sentence to the other without relying upon explicit mappings such as ‘ibuprofen has_name Advil’, ‘impairs has_synonym damages’, etc. In contrast, the distributed representations of these sentences should, in principle, be nearly identical, facilitating direct extrapolation.</p>
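<p>The point can be sketched numerically. The toy vectors below are invented for illustration (real embeddings are learned from data and have hundreds of dimensions), but they show how distributed representations support extrapolation where symbol matching fails:</p>

```python
import numpy as np

# Invented toy 3-D "embeddings"; synonyms get nearby vectors.
emb = {
    "ibuprofen": np.array([0.90, 0.10, 0.00]),
    "advil":     np.array([0.88, 0.12, 0.05]),
    "impairs":   np.array([0.00, 0.90, 0.30]),
    "damages":   np.array([0.05, 0.85, 0.35]),
    "renal":     np.array([0.10, 0.20, 0.90]),
    "kidneys":   np.array([0.12, 0.25, 0.88]),
}

def sentence_vec(tokens):
    """Crude sentence representation: average of token embeddings."""
    return np.mean([emb[t] for t in tokens], axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

s1 = sentence_vec(["ibuprofen", "impairs", "renal"])
s2 = sentence_vec(["advil", "damages", "kidneys"])

# Zero token overlap, yet nearly identical distributed representations.
assert not {"ibuprofen", "impairs", "renal"} & {"advil", "damages", "kidneys"}
assert cosine(s1, s2) > 0.99
```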
<p>Over the past decade, neural networks have established themselves as the <em>de facto</em> approach to representation learning for essentially every ML problem in which their training has been shown feasible <a class="citation" href="#bengio2013representation"><span style="vertical-align: super">21,22</span></a>. While some neural architectures – e.g. Word2vec<a class="citation" href="#mikolov2013efficient"><span style="vertical-align: super">34</span></a> – are designed exclusively to produce embeddings that will be utilized in downstream tasks, the primary appeal of neural networks is that <em>every</em> deep learning architecture serves as a representation learning system. More specifically, the activations of each layer of neurons serve as a distributed representation of the input that is progressively refined in a hierarchical manner to produce representations of increased abstraction with increasing depth <label for="depth_comment" class="margin-toggle sidenote-number"></label><input type="checkbox" id="depth_comment" class="margin-toggle" /><span class="sidenote"> While even single-layer neural networks can provably approximate any continuous function, this guarantee is impractical because the proof may require an arbitrarily large number of hidden nodes<a class="citation" href="#cybenko1989approximation"><span style="vertical-align: super">29</span></a>. Deep neural networks, in contrast, allow for feature re-use that is exponential in the number of layers, which makes deep networks more expressive and more statistically efficient to train. <a class="citation" href="#haastad1991power"><span style="vertical-align: super">35,21</span></a> </span>. In this light, a typical supervised neural network architecture of depth \(k\), for example, can arguably be best understood as a representation learning architecture of depth \(k-1\) followed by a simple linear or logistic regression.</p>
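<p>A minimal forward pass makes this reading explicit. The weights below are random and untrained; the point is only the structure: each hidden layer emits a representation, and the output layer is a logistic regression on the deepest one:</p>

```python
import numpy as np

# A depth-3 network read as: depth-2 representation learner + logistic regression.
rng = np.random.default_rng(2)
x = rng.normal(size=4)                    # one input with 4 features

W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w_out, b_out = rng.normal(size=8), 0.0

relu = lambda z: np.maximum(z, 0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

h1 = relu(W1 @ x + b1)                    # representation after layer 1
h2 = relu(W2 @ h1 + b2)                   # more abstract representation, layer 2
p = sigmoid(w_out @ h2 + b_out)           # logistic regression on h2

assert h1.shape == (8,) and h2.shape == (8,)
assert 0.0 < p < 1.0
```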
<p>Representations learned by neural networks have a number of desirable properties. First, neural representations are low-dimensional, distributed, and hierarchically organized, as described above. Second, neural networks can learn parameterized mappings that are strongly nonlinear yet can still be used to directly compute embeddings for new data points. Finally, Yoshua Bengio and others have extensively argued that neural networks have a higher capacity for generalization than other well-established ML methods such as kernels <a class="citation" href="#bengio2005non"><span style="vertical-align: super">36,37</span></a> and decision trees <a class="citation" href="#bengio2010decision"><span style="vertical-align: super">38</span></a>, specifically because they avoid an excessively strong inductive bias towards <em>smoothness</em>; in other words, when making a new prediction for some new data point \(x\), deep representation learning methods do not exclusively rely upon the training points that are immediately nearby \(x\) in the original feature space.</p>
<p>Representation learning using neural networks also benefits from being modular, and therefore flexible <label for="flexibility_cost" class="margin-toggle sidenote-number"></label><input type="checkbox" id="flexibility_cost" class="margin-toggle" /><span class="sidenote"> The flexibility of neural networks doesn’t come without a price: In addition to obvious concerns about highly parameterized models and overfitting<a class="citation" href="#friedman2001elements"><span style="vertical-align: super">39</span></a>, for example, the ease of implementing complicated DL architectures has arguably produced a research culture focused on ever-larger – and more costly<a class="citation" href="#lacoste2019quantifying"><span style="vertical-align: super">40</span></a> – models that are often poorly characterized and very difficult to reproduce. <a class="citation" href="#lipton2018troubling"><span style="vertical-align: super">41</span></a> </span> and extensible to design. For example, given two neural architectures that each create a distributed representation of a unique data modality, these can be straightforwardly combined into a single, fused architecture that creates a composite multi-modal representation (e.g. combining audio embeddings and visual embeddings into composite video embeddings<a class="citation" href="#ngiam2011multimodal"><span style="vertical-align: super">42</span></a>). Such an approach is leveraged in Chapter 2. Another example of the power afforded by the modularity of neural architectures is <em>Generative Adversarial Networks</em> (GANs) <a class="citation" href="#goodfellow2014generative"><span style="vertical-align: super">43</span></a>, which learn to generate richly structured data by pitting a data-simulating ‘generator’ model against a jointly-trained ‘discriminator’ model that is optimized to distinguish real from generated data. In Supplemental Chapter 1, I demonstrate this approach using a GAN trained to simulate hip radiographs.</p>
<p>Taken together, neural architectures can be designed to expressively implement a broad array of inductive biases, while still allowing the network parameters to search over millions of compatible functions.</p>
<h3 id="-infusing-domain-knowledge-into-neural-representations"><a name="Knowledge"> Infusing Domain Knowledge into Neural Representations</a></h3>
<p>Neural networks have largely absolved the contemporary researcher of the need to hand-engineer features, but this reality has not eliminated the role of external knowledge in designing our models and their inductive biases. In this section, I compare and contrast various approaches to explicitly and implicitly infuse domain knowledge into neural representations.</p>
<p>The first paradigm involves the design of layers and architectures that align the representational capacity of the network with our prior knowledge of the problem domain. For instance, if we know that the data we provide have a particular property (e.g. unordered features), we can enforce corresponding constraints in our architecture (e.g. permutation invariance, as in DeepSet <a class="citation" href="#zhang2019deep"><span style="vertical-align: super">44</span></a> or self-attention<a class="citation" href="#vaswani2017attention"><span style="vertical-align: super">45</span></a> without position encodings). This is an example of a relational inductive bias <label for="rel_ind_bias" class="margin-toggle">⊕</label><input type="checkbox" id="rel_ind_bias" class="margin-toggle" checked="" /><span class="marginnote"><img src="/assets/thesis_images/relational.png" /></span> <a class="citation" href="#battaglia2018relational"><span style="vertical-align: super">20</span></a>. Relatedly, we can manually wire the network in a manner that corresponds with our prior understanding of relationships between variables. Peng et al <a class="citation" href="#peng2019combining"><span style="vertical-align: super">46</span></a> adopted this approach by building a feed-forward neural network for single cell RNA-Seq data in which the input neurons for each gene were wired according to the Gene Ontology <a class="citation" href="#ashburner2000gene"><span style="vertical-align: super">47</span></a>; this approach strictly weakens the capacity of the network, but may be useful if, for example, we have a strong prior that particular relationships would be confounding. An alternative means to a similar end is to perform graph convolutions over edges that reflect domain knowledge <a class="citation" href="#mcdermott2019deep"><span style="vertical-align: super">48</span></a>.</p>
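<p>A toy DeepSets-style encoder (random, untrained weights; not any published implementation) shows how sum pooling bakes permutation invariance directly into the architecture:</p>

```python
import numpy as np

# Sketch of a permutation-invariant set encoder: a shared network phi is
# applied to each element, the results are sum-pooled, and rho maps the
# pooled vector to a set embedding. Weights here are random toy values.
rng = np.random.default_rng(3)
W_phi = rng.normal(size=(16, 5))
W_rho = rng.normal(size=(8, 16))

def deepset(X):
    """X: (n_elements, 5) set of feature vectors -> (8,) set embedding."""
    phi = np.maximum(X @ W_phi.T, 0)     # shared element-wise encoder
    pooled = phi.sum(axis=0)             # permutation-invariant pooling
    return np.maximum(W_rho @ pooled, 0)

X = rng.normal(size=(6, 5))
perm = rng.permutation(6)
assert np.allclose(deepset(X), deepset(X[perm]))   # element order doesn't matter
```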
<p>Another explicit paradigm for infusing knowledge into neural networks is to augment the architecture with the ability to query external information. For example, models can be augmented with knowledge graphs in the form of fact triples, which they can query using an attention mechanism <label for="attention" class="margin-toggle">⊕</label><input type="checkbox" id="attention" class="margin-toggle" checked="" /><span class="marginnote"><img src="/assets/thesis_images/attention.png" /></span> <a class="citation" href="#annervaz2018learning"><span style="vertical-align: super">49,50</span></a>. More generally, attention can be used to allow modules to incorporate relevant information from embeddings of any knowledge source or data modality. For example, Xu et al <a class="citation" href="#xu2015show"><span style="vertical-align: super">51</span></a> introduced an architecture in which a language model attends to images to generate image captions. Self-attention, or intra-attention, is an attention mechanism that allows for relating different positions within a single sequence <a class="citation" href="#cheng2016long"><span style="vertical-align: super">52,45</span></a>, image <a class="citation" href="#parmar2018image"><span style="vertical-align: super">53</span></a>, or other instance of input data; this allows representations to better share and synthesize information across features.</p>
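<p>The mechanism at the heart of these methods can be sketched in a few lines. This is generic scaled dot-product attention on random toy inputs, not any particular published system: each query retrieves a softmax-weighted mixture of value vectors according to query-key similarity.</p>

```python
import numpy as np

rng = np.random.default_rng(4)
d = 16
Q = rng.normal(size=(3, d))    # 3 queries (e.g. text positions)
K = rng.normal(size=(5, d))    # 5 keys (e.g. embedded knowledge triples)
V = rng.normal(size=(5, d))    # values paired with the keys

scores = Q @ K.T / np.sqrt(d)                     # (3, 5) similarities
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)     # softmax over the keys
attended = weights @ V                            # (3, d) retrieved information

assert np.allclose(weights.sum(axis=1), 1.0)      # each query's weights sum to 1
assert attended.shape == (3, d)
```

Self-attention is the special case where the queries, keys, and values are all derived from the same input.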
<p>Transfer learning<a class="citation" href="#yang200610"><span style="vertical-align: super">54,55</span></a> provides a family of methods for infusing knowledge gained from a previous learning task into a learning algorithm. This is related to, but distinct from, <em>multi-task learning</em>, which seeks to learn several tasks simultaneously under the premise that performance and efficiency can be improved by sharing knowledge between the tasks during learning. While there are many forms of transfer learning <label for="transfer" class="margin-toggle">⊕</label><input type="checkbox" id="transfer" class="margin-toggle" checked="" /><span class="marginnote"><img src="/assets/thesis_images/transfer.png" /></span> <a class="citation" href="#zhuang2019comprehensive"><span style="vertical-align: super">56</span></a>, the canonical form in the setting of deep learning is <em>pretraining</em>. In pretraining, model weights from a trained neural network are used to initialize some subset of the weights in another network; these parameters can then be either frozen or “fine-tuned” with further training on the target task. Initial transfer learning experiments were conducted using unsupervised pretraining with autoencoders
<label for="autoencoders" class="margin-toggle sidenote-number"></label><input type="checkbox" id="autoencoders" class="margin-toggle" /><span class="sidenote"> Autoencoders <a class="citation" href="#hinton2006reducing"><span style="vertical-align: super">57</span></a> learn representations guided by the inductive bias that a good representation should be able to be used to reconstruct its raw input. They are an example of an ‘encoder-decoder’ architecture, which consists of an encoder, which takes the raw input and uses a series of layers to embed it into a low-dimensional space, and a decoder, which takes an embedding from the encoder and tries to reconstruct the raw data; this combined architecture is then trained in an end-to-end fashion. When the decoder is trained specifically to reconstruct the exact same input passed into the encoder, this is called an autoencoder. (Alternatively, decoders can be trained to produce related data, a prominent example being Seq2seq models that can, for example, encode a sentence from one language and decode it into another<a class="citation" href="#sutskever2014sequence"><span style="vertical-align: super">58</span></a>.) <em>Variational autoencoders</em> <a class="citation" href="#kingma2013auto"><span style="vertical-align: super">59</span></a> combine autoencoding with stochastic variational inference to build generative models that can be used to sample entirely new data. </span> before transferring weights to a supervised model for a downstream task; this technique is an example of inductive <em>semi-supervised learning</em><a class="citation" href="#van2020survey"><span style="vertical-align: super">60</span></a>. 
In the past decade, supervised pretraining has become very popular, with the quintessential example being the initialization of an image processing architecture with all but the final layer of a model trained on the ImageNet dataset <a class="citation" href="#deng2009imagenet"><span style="vertical-align: super">61</span></a>. More recently, <em>self-supervised</em> transfer learning has received significant attention, particularly in natural language processing. In self-supervised learning, subsets of a data or feature set are masked, and neural networks are trained to predict them from remaining features. The resulting representations can then be used directly for downstream tasks, such as information retrieval, or be leveraged for transfer learning. Word embeddings <a class="citation" href="#mikolov2013distributed"><span style="vertical-align: super">33</span></a> are arguably the first widespread instance of self-supervised transfer learning, with more recent methods including language model pretraining <a class="citation" href="#howard2018universal"><span style="vertical-align: super">62,63,45</span></a>.</p>
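<p>The freeze-and-fine-tune recipe can be caricatured as follows. Here a fixed random projection stands in for transferred encoder weights, and the new "head" is fit by least squares purely for simplicity; real pipelines use trained encoders and gradient-based fine-tuning.</p>

```python
import numpy as np

rng = np.random.default_rng(5)
W_pre = rng.normal(size=(32, 20))            # "pretrained" encoder weights, frozen

X = rng.normal(size=(200, 20))               # downstream dataset
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # downstream labels

H = np.maximum(X @ W_pre.T, 0)               # frozen features (never updated)
w_head, *_ = np.linalg.lstsq(H, 2 * y - 1, rcond=None)  # train only the head
pred = H @ w_head > 0

acc = np.mean(pred == (y == 1))
assert acc > 0.6    # above chance, despite never touching the encoder
```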
<p><em>Contrastive learning</em> methods <label for="contrastive" class="margin-toggle">⊕</label><input type="checkbox" id="contrastive" class="margin-toggle" checked="" /><span class="marginnote"><img src="/assets/thesis_images/contrastive.png" /></span> learn representations by taking in small sets of examples and optimizing embeddings to bring similar data together while driving dissimilar data apart. This is a form of <em>metric learning</em>. Early methods in this field include Siamese networks and triplet networks <a class="citation" href="#hoffer2015deep"><span style="vertical-align: super">64</span></a>, which were initially developed to learn deep representations of images. Recent analyses suggest that many methods developed in the past several years have failed to advance beyond triplet networks <a class="citation" href="#musgrave2020metric"><span style="vertical-align: super">65</span></a>. Contrastive methods have been used in the pretraining step of a semi-supervised framework to achieve the current state-of-the-art in limited data image classification <a class="citation" href="#chen2020simple"><span style="vertical-align: super">66</span></a>. In addition, contrastive optimization can be leveraged using multi-modal data to create aligned representations across modalities <a class="citation" href="#deng2018triplet"><span style="vertical-align: super">67</span></a>.</p>
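<p>The core of these methods is a margin-based objective. Below is a minimal triplet loss on hand-picked toy vectors: it is zero once the anchor is closer to the positive than to the negative by at least the margin, and positive otherwise.</p>

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on the gap between anchor-positive and anchor-negative distance."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])     # similar example: close to the anchor
n = np.array([3.0, 0.0])     # dissimilar example: far away

assert triplet_loss(a, p, n) == 0.0   # margin satisfied: no gradient signal
assert triplet_loss(a, n, p) > 0.0    # roles swapped: loss penalizes the layout
```

Minimizing this loss over a training set pulls positives toward their anchors and pushes negatives beyond the margin.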
<p>The methods described in this section can be placed on a spectrum. Hand-engineered architectures are based on strong, specific prior assumptions about the problem domain, and are used to fundamentally alter the <em>representational capacity</em> of the network. At the other extreme, self-supervised and contrastive architectures make very few specific assumptions about the problem domain and do nothing to alter the representational capacity of the algorithm; instead, their innovation lies in devising <em>training schemes and loss functions</em> that guide the network to learn underlying relationships and find a generalizable solution. In between these two extremes, approaches that augment networks with access to external knowledge through attention mechanisms often assume that specific knowledge will be helpful, but allow the model to determine for itself which knowledge to employ. Transfer learning assumes that other specific learning tasks will provide useful knowledge and experience for the target domain, but makes minimal assumptions about precisely what this knowledge will be. Despite (arguably significant) philosophical differences, these and yet other paradigms are not mutually exclusive, and they share the common goal of improving generalization and data efficiency by introducing richer domain understanding into neural networks.</p>
<p><br /></p>
<figure><img src="/../assets/thesis_images/knowledge_paradigms.png" /><figcaption class="maincolumn-figure">Example paradigms for infusing domain knowledge into learned representations.</figcaption></figure>
<p>Finally, while this section – and indeed several chapters of this thesis – focuses on the design of neural architectures and training curricula, the role of domain knowledge is truly inescapable when it comes to the evaluation of deployable systems. Accordingly, the topic of deployment analysis will also be a major theme of this thesis.</p>
<p>The current draft of my full PhD thesis can be found <a href="https://www.dropbox.com/s/slw2vkxajgwgp6i/PhD_Thesis.pdf?dl=0">here</a>.</p>
<h3 id="bibliography">Bibliography</h3>
<ol class="bibliography"><li><span id="chami2020machine">1.Chami, I., Abu-El-Haija, S., Perozzi, B., Ré, C., and Murphy, K. (2020). Machine Learning on Graphs: A Model and Comprehensive Taxonomy.</span></li>
<li><span id="empiricus1933outlines">2.Empiricus, S., and Bury, R.G. (1933). Outlines of pyrrhonism. Eng. trans, by RG Bury (Cambridge Mass.: Harvard UP, 1976), II <i>20</i>, 90–1.</span></li>
<li><span id="perrett1984problem">3.Perrett, R.W. (1984). The problem of induction in Indian philosophy. Philosophy East and West <i>34</i>, 161–174.</span></li>
<li><span id="humeTreatise">4.Hume, D. (1739). A treatise of human nature (Oxford University Press).</span></li>
<li><span id="humeEnquiry">5.Hume, D. (1748). An Enquiry Concerning Human Understanding (Oxford University Press).</span></li>
<li><span id="garrett2011reason">6.Garrett, D., and Millican, P.J.R. (2011). Reason, Induction, and Causation in Hume’s Philosophy (Institute for Advanced Studies in the Humanities, The University of Edinburgh).</span></li>
<li><span id="goodman1955fact">7.Goodman, N. (1955). Fact, fiction, and forecast (Harvard University Press).</span></li>
<li><span id="popper2014conjectures">8.Popper, K. (2014). Conjectures and refutations: The growth of scientific knowledge (Routledge).</span></li>
<li><span id="hilborn1997ecological">9.Hilborn, R., and Mangel, M. (1997). The ecological detective: confronting models with data (Princeton University Press).</span></li>
<li><span id="mayo1996ducks">10.Mayo, D.G. (1996). Ducks, rabbits, and normal science: Recasting the Kuhn’s-eye view of Popper’s demarcation of science. The British Journal for the Philosophy of Science <i>47</i>, 271–290.</span></li>
<li><span id="queen2002experimental">11.Queen, J.P., Quinn, G.P., and Keough, M.J. (2002). Experimental design and data analysis for biologists (Cambridge University Press).</span></li>
<li><span id="mitchell1997machine">12.Mitchell, T.M., and others (1997). Machine learning.</span></li>
<li><span id="tibshirani1996regression">13.Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological) <i>58</i>, 267–288.</span></li>
<li><span id="srivastava2014dropout">14.Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research <i>15</i>, 1929–1958.</span></li>
<li><span id="prechelt1998early">15.Prechelt, L. (1998). Early stopping-but when? In Neural Networks: Tricks of the trade (Springer), pp. 55–69.</span></li>
<li><span id="cortes1995support">16.Cortes, C., and Vapnik, V. (1995). Support-vector networks. Machine learning <i>20</i>, 273–297.</span></li>
<li><span id="girosi1995regularization">17.Girosi, F., Jones, M., and Poggio, T. (1995). Regularization theory and neural networks architectures. Neural computation <i>7</i>, 219–269.</span></li>
<li><span id="mitchell1980need">18.Mitchell, T.M. (1980). The need for biases in learning generalizations (Department of Computer Science, Laboratory for Computer Science Research …).</span></li>
<li><span id="muggleton1991inductive">19.Muggleton, S. (1991). Inductive logic programming. New generation computing <i>8</i>, 295–318.</span></li>
<li><span id="battaglia2018relational">20.Battaglia, P.W., Hamrick, J.B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V., Malinowski, M., Tacchetti, A., Raposo, D., Santoro, A., Faulkner, R., et al. (2018). Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261.</span></li>
<li><span id="bengio2013representation">21.Bengio, Y., Courville, A., and Vincent, P. (2013). Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence <i>35</i>, 1798–1828.</span></li>
<li><span id="goodfellow2016representation">22.Goodfellow, I., Bengio, Y., and Courville, A. (2016). Representation learning. Deep Learning, 517–548.</span></li>
<li><span id="bordes2013translating">23.Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., and Yakhnenko, O. (2013). Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pp. 2787–2795.</span></li>
<li><span id="pearson1901liii">24.Pearson, K. (1901). LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science <i>2</i>, 559–572.</span></li>
<li><span id="forgy1965cluster">25.Forgy, E.W. (1965). Cluster analysis of multivariate data: efficiency versus interpretability of classifications. biometrics <i>21</i>, 768–769.</span></li>
<li><span id="jutten1991blind">26.Jutten, C., and Herault, J. (1991). Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal processing <i>24</i>, 1–10.</span></li>
<li><span id="tenenbaum2000global">27.Tenenbaum, J.B., De Silva, V., and Langford, J.C. (2000). A global geometric framework for nonlinear dimensionality reduction. science <i>290</i>, 2319–2323.</span></li>
<li><span id="roweis2000nonlinear">28.Roweis, S.T., and Saul, L.K. (2000). Nonlinear dimensionality reduction by locally linear embedding. science <i>290</i>, 2323–2326.</span></li>
<li><span id="cybenko1989approximation">29.Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems <i>2</i>, 303–314.</span></li>
<li><span id="dahl2010phone">30.Dahl, G., Ranzato, M.A., Mohamed, A.-rahman, and Hinton, G.E. (2010). Phone recognition with the mean-covariance restricted Boltzmann machine. In Advances in neural information processing systems, pp. 469–477.</span></li>
<li><span id="hinton2006fast">31.Hinton, G.E., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural computation <i>18</i>, 1527–1554.</span></li>
<li><span id="mikolov2011empirical">32.Mikolov, T., Deoras, A., Kombrink, S., Burget, L., and Černockỳ, J. (2011). Empirical evaluation and combination of advanced language modeling techniques. In Twelfth Annual Conference of the International Speech Communication Association.</span></li>
<li><span id="mikolov2013distributed">33.Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119.</span></li>
<li><span id="mikolov2013efficient">34.Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.</span></li>
<li><span id="haastad1991power">35.Håstad, J., and Goldmann, M. (1991). On the power of small-depth threshold circuits. Computational Complexity <i>1</i>, 113–129.</span></li>
<li><span id="bengio2005non">36.Bengio, Y., and Monperrus, M. (2005). Non-local manifold tangent learning. In Advances in Neural Information Processing Systems, pp. 129–136.</span></li>
<li><span id="bengio2006curse">37.Bengio, Y., Delalleau, O., and Roux, N.L. (2006). The curse of highly variable functions for local kernel machines. In Advances in neural information processing systems, pp. 107–114.</span></li>
<li><span id="bengio2010decision">38.Bengio, Y., Delalleau, O., and Simard, C. (2010). Decision trees do not generalize to new variations. Computational Intelligence <i>26</i>, 449–467.</span></li>
<li><span id="friedman2001elements">39.Friedman, J., Hastie, T., and Tibshirani, R. (2001). The elements of statistical learning (Springer series in statistics New York).</span></li>
<li><span id="lacoste2019quantifying">40.Lacoste, A., Luccioni, A., Schmidt, V., and Dandres, T. (2019). Quantifying the Carbon Emissions of Machine Learning. arXiv preprint arXiv:1910.09700.</span></li>
<li><span id="lipton2018troubling">41.Lipton, Z.C., and Steinhardt, J. (2018). Troubling Trends in Machine Learning Scholarship.</span></li>
<li><span id="ngiam2011multimodal">42.Ngiam, J., Khosla, A., Kim, M., Nam, J., Lee, H., and Ng, A.Y. (2011). Multimodal deep learning.</span></li>
<li><span id="goodfellow2014generative">43.Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680.</span></li>
<li><span id="zhang2019deep">44.Zhang, Y., Hare, J., and Prugel-Bennett, A. (2019). Deep set prediction networks. In Advances in Neural Information Processing Systems, pp. 3207–3217.</span></li>
<li><span id="vaswani2017attention">45.Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008.</span></li>
<li><span id="peng2019combining">46.Peng, J., Wang, X., and Shang, X. (2019). Combining gene ontology with deep neural networks to enhance the clustering of single cell RNA-Seq data. BMC bioinformatics <i>20</i>, 284.</span></li>
<li><span id="ashburner2000gene">47.Ashburner, M., Ball, C.A., Blake, J.A., Botstein, D., Butler, H., Cherry, J.M., Davis, A.P., Dolinski, K., Dwight, S.S., Eppig, J.T., et al. (2000). Gene ontology: tool for the unification of biology. Nature genetics <i>25</i>, 25–29.</span></li>
<li><span id="mcdermott2019deep">48.McDermott, M., Wang, J., Zhao, W.N., Sheridan, S.D., Szolovits, P., Kohane, I., Haggarty, S.J., and Perlis, R.H. (2019). Deep Learning Benchmarks on L1000 Gene Expression Data. IEEE/ACM transactions on computational biology and bioinformatics.</span></li>
<li><span id="annervaz2018learning">49.Annervaz, K.M., Chowdhury, S.B.R., and Dukkipati, A. (2018). Learning beyond datasets: Knowledge graph augmented neural networks for natural language processing. arXiv preprint arXiv:1802.05930.</span></li>
<li><span id="kishimoto2018knowledge">50.Kishimoto, Y., Murawaki, Y., and Kurohashi, S. (2018). A knowledge-augmented neural network model for implicit discourse relation classification. In Proceedings of the 27th International Conference on Computational Linguistics, pp. 584–595.</span></li>
<li><span id="xu2015show">51.Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., and Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pp. 2048–2057.</span></li>
<li><span id="cheng2016long">52.Cheng, J., Dong, L., and Lapata, M. (2016). Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733.</span></li>
<li><span id="parmar2018image">53.Parmar, N., Vaswani, A., Uszkoreit, J., Kaiser, Ł., Shazeer, N., Ku, A., and Tran, D. (2018). Image transformer. arXiv preprint arXiv:1802.05751.</span></li>
<li><span id="yang200610">54.Yang, Q., and Wu, X. (2006). 10 challenging problems in data mining research. International Journal of Information Technology & Decision Making <i>5</i>, 597–604.</span></li>
<li><span id="pan2009survey">55.Pan, S.J., and Yang, Q. (2009). A survey on transfer learning. IEEE Transactions on knowledge and data engineering <i>22</i>, 1345–1359.</span></li>
<li><span id="zhuang2019comprehensive">56.Zhuang, F., Qi, Z., Duan, K., Xi, D., Zhu, Y., Zhu, H., Xiong, H., and He, Q. (2019). A Comprehensive Survey on Transfer Learning. arXiv preprint arXiv:1911.02685.</span></li>
<li><span id="hinton2006reducing">57.Hinton, G.E., and Salakhutdinov, R.R. (2006). Reducing the dimensionality of data with neural networks. science <i>313</i>, 504–507.</span></li>
<li><span id="sutskever2014sequence">58.Sutskever, I., Vinyals, O., and Le, Q.V. (2014). Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112.</span></li>
<li><span id="kingma2013auto">59.Kingma, D.P., and Welling, M. (2013). Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.</span></li>
<li><span id="van2020survey">60.Van Engelen, J.E., and Hoos, H.H. (2020). A survey on semi-supervised learning. Machine Learning <i>109</i>, 373–440.</span></li>
<li><span id="deng2009imagenet">61.Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (Ieee), pp. 248–255.</span></li>
<li><span id="howard2018universal">62.Howard, J., and Ruder, S. (2018). Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146.</span></li>
<li><span id="devlin2018bert">63.Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.</span></li>
<li><span id="hoffer2015deep">64.Hoffer, E., and Ailon, N. (2015). Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition (Springer), pp. 84–92.</span></li>
<li><span id="musgrave2020metric">65.Musgrave, K., Belongie, S., and Lim, S.-N. (2020). A Metric Learning Reality Check. arXiv preprint arXiv:2003.08505.</span></li>
<li><span id="chen2020simple">66.Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. (2020). A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709.</span></li>
<li><span id="deng2018triplet">67.Deng, C., Chen, Z., Liu, X., Gao, X., and Tao, D. (2018). Triplet-based deep hashing network for cross-modal retrieval. IEEE Transactions on Image Processing <i>27</i>, 3893–3903.</span></li></ol>
<style>
.bibliography {
font-size: small;
width: 50%;
}
.bibliography li{
margin: 0px;
}
</style>
<script type="text/javascript">
$(document).ready(function(){
$("input:checkbox").prop("checked", "true");
});
</script>
Mon, 22 Jun 2020 06:00:00 -0400
http://sgfin.github.io/2020/06/22/Induction-Intro/
<h2>Comments on ML "versus" statistics</h2>
<p><strong>Why am I writing this?</strong></p>
<p>Over the last few years, I’ve observed many vigorous debates about “machine learning versus statistics.” Often, these are sparked by some paper/blog post/press release that either (a) involves some use of logistic regression (or some other type of GLM) being described as machine learning, or (b) performs a meta-analysis attempting to pit the fields against each other.</p>
<p>I have allowed myself to get pulled down this rabbit hole far too many times, wasting hours of my time in fruitless debate. As such, I have decided to write this post as a way to inoculate myself against the urge to enter future discussions. The first two sections explain why I consider most ML “versus” Stats debates to be fundamentally flawed, even in their very premise. The following two sections explain why I <em>do</em> validate where people are coming from in having these debates, but still think they (the debates!) are a colossal waste of time.</p>
<p>As time goes on, I might even write a bot to post this on relevant twitter threads. If I do, I will <em>intentionally</em> code this bot using logistic regression and call it machine learning, just to maximize peskiness.</p>
<p><strong>Outline</strong></p>
<div style="padding-top: 10px; font-size:large">
<a href="#Sect1">-Neglected historical context: The term "machine learning" was not coined to contrast with statistics, but to contrast the field with competing paradigms for building intelligent computer systems.</a><br /><br />
<a href="#Sect2">-Arguments about who "owns" regression miss the point.</a><br /><br />
<a href="#Sect3">-Distinctions in goals have yielded a divergence in methods and cultures, which explains shifting connotations of the term "machine learning."</a><br /><br />
<a href="#Sect4">-Isn't this whole "debate" a massive waste of time?</a>
</div>
<h3 id="-neglected-historical-context-the-term-machine-learning-was-not-coined-to-contrast-with-statistics-but-to-contrast-the-field-with-competing-paradigms-for-building-intelligent-computer-systems"><a name="Sect1"></a> Neglected historical context: The term “machine learning” was <em>not</em> coined to contrast with statistics, but to contrast the field with competing paradigms for building intelligent computer systems.</h3>
<p>Before getting to Machine Learning (ML), a couple paragraphs on Artificial Intelligence (AI). These days, many people – including me – reflexively wince when they hear the term “AI,” because it is (a) used by slimy buzzword peddlers to such an extent that it is now nearly synonymous with “snake oil,” (b) overloaded with connotations of sentient killer robots, and (c) almost exclusively used to refer to machine learning, anyway. This is all quite unfortunate. However, try to set that aside for just <em>one</em> paragraph.</p>
<p>Engineers have dreamed of building something “smart” for thousands of years, but the term “artificial intelligence” itself was coined by John McCarthy in preparation for the famous “Dartmouth Conference” of 1956. McCarthy defined artificial intelligence as “the science and engineering of making intelligent machines,” and that’s not too bad for a pithy one-liner. Importantly for this discussion, McCarthy was able to convince his colleagues to adopt this term at Dartmouth in large part because it was <em>vague</em>. At that point in time, computer scientists who were trying to crack intelligence were focused <em>not</em> on data-driven methods, but on things like automata theory, formal logic, and cybernetics. McCarthy wanted to create a term that would capture all of these paradigms (and other ones yet to come) rather than favoring any specific approach.</p>
<p>It was in this context that Arthur Samuel (one of the attendees at the Dartmouth Conference) coined the term “Machine Learning” in 1959, which he defined as:</p>
<blockquote>
<p>Field of study that gives computers the ability to learn without being explicitly programmed.</p>
</blockquote>
<p>Samuel and his colleagues wanted to help computers become “smart” by equipping them with the capacity to recognize patterns and iteratively improve over time. While this may seem like an obvious approach today, it took decades before this became the dominant mode of AI research (as opposed to, say, building systems that exhibit “intelligence” by applying propositional logic over curated knowledge graphs).</p>
<p>In other words, machine learning was coined to describe a design process for computers that <em>leverages</em> statistical methods to improve performance over time. The term was created by computer scientists, for computer scientists, and designed as a contrast with <em>non-data-driven approaches</em> to building smart machines. It was <em>not</em> designed to be a contrast with <em>statistics</em>, which is focused on using (often overlapping) data-driven methods to inform humans.</p>
<p>Another extremely widely-referenced definition of ML comes from Tom M. Mitchell’s 1997 textbook, which said:</p>
<blockquote>
<p>The field of machine learning is concerned with the question of how to construct computer programs that automatically improve with experience,</p>
</blockquote>
<p>and offered the accompanying semi-formal definition:</p>
<blockquote>
<p>A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.</p>
</blockquote>
<p>This is all very much in accordance with Arthur Samuel’s definition, and I could pull other more recent definitions with similar verbiage. Another passage from Mitchell that I think gets less circulation than it deserves, however, is the following (taken with a little reformatting from a 2006 article called <a href="http://www.cs.cmu.edu/~tom/pubs/MachineLearning.pdf">“The Discipline of Machine Learning”</a>):</p>
<blockquote>
<p>Machine Learning is a natural outgrowth of the intersection of Computer Science and Statistics. We might say the defining question of Computer Science is “How can we build machines that solve problems, and which problems are inherently tractable/intractable?” The question that largely defines Statistics is “What can be inferred from data plus a set of modeling assumptions, with what reliability?”</p>
<p>The defining question for Machine Learning builds on both, but it is a distinct question. Whereas Computer Science has focused primarily on how to manually program computers, Machine Learning focuses on the question of how to get computers to program themselves (from experience plus some initial structure). Whereas Statistics has focused primarily on what conclusions can be inferred from data, Machine Learning incorporates additional questions about what computational architectures and algorithms can be used to most effectively capture, store, index, retrieve and merge these data, how multiple learning subtasks can be orchestrated in a larger system, and questions of computational tractability.</p>
</blockquote>
<h3 id="-arguments-about-who-owns-regression-miss-the-point"><a name="Sect2"></a> Arguments about who “owns” regression miss the point</h3>
<p>Given the history just described, I must admit that I’ve been frustrated at times by the authoritarian tone with which many have tried to enforce a false dichotomization between statistics and ML methods. In particular, there appears to be a strange fixation by specific online personalities on insisting that regression-powered applications must <em>not</em> be described as ML. In reading some of these discussions, one might even come away thinking that there is a conspiracy at play to annex regression away from statistics.</p>
<p>To me, this is only slightly less silly than trying to stir up a turf war to target anyone who uses logistic regression and calls it “econometrics.” For at least 60 years, “machine learning” has been about building the best learning computers that we can, not some weird methods competition with statistics. This is why, when it comes to teaching “ML methods,” almost every introductory machine learning textbook or course that I’ve ever seen – Mitchell, Murphy, Bishop, Ng, etc. – spends much of its effort on teaching GLMs and their variants. It’s also why it’s <em>perfectly sensible</em> for specialized textbooks to include plots like <a href="https://images.app.goo.gl/WxjjxjUXSxQw75To7">this one</a>, which routinely makes the rounds to much pearl-clutching. Would I expect such a plot to be useful to statisticians? Of course not! But such plots make sense in the context of the ML/AI fields, which are concerned with different ways to make programs act “smart”. And to put it bluntly for any [c]rank-y colleagues in statistics: <em>You don’t get to decide</em> which taxonomies another field finds useful in framing its own problems and history.</p>
<p>The great irony with the whole recurring snafu around who “owns” regression – and all of its variants – is that it simultaneously undersells <em>both</em> machine learning <em>and</em> statistics, for many reasons that include the following four:</p>
<ul>
<li>First, it minimizes – or even defines away – the core role that classic statistical methods <em>continue</em> to play in efforts to build computer programs that learn.</li>
<li>Second, it ignores the impact that ML has had on statistics, when in reality AI and CS have been a <em>massive</em> boon to statistics research. This includes the generation of new statistical paradigms (e.g. Judea Pearl and others’ work on causality, now one of many booming subareas of stats that came from ML) and a wide array of algorithmic and computational tools that have enabled the rise of statistical computing.</li>
<li>Third, a false dichotomy between ML and stats minimizes the wide variation – critical to modeling decisions – <em>within</em> each purported class. It’s also silly. For example, take a simple logistic regression model implemented in pytorch, and consider (a) adding 1000 polynomial interaction terms between all the features plus a ridge penalty, versus (b) adding a second fully connected layer. The dichotomous approach to ML vs. stats would say that (b) fundamentally moves the model from stats to ML, while dismissing the arguably larger impact on modeling induced by (a). By the same token, I’m always shocked when I see meta-analyses that claim to compare the accuracy of ML “versus” statistics, and then use embarrassingly poor, out-of-date ML models (like a single decision tree) or bad statistical practices. In short, the spectrum within each of these purportedly distinct methodological camps is so broad (and the execution so essential) that most statements about the collections as a whole are minimally helpful. It’s all about picking the right tool for a specific job, and then taking the time and effort to use that tool properly!</li>
<li>Fourth, the above dispute also ignores the fact that many top researchers, publication venues, and papers in stats or ML are fully-fledged citizens of both communities.</li>
</ul>
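<p>The pytorch thought experiment in the third bullet can be made concrete. In this sketch (the feature counts, penalty strength, and hidden width are all illustrative assumptions), modification (a) keeps the model one layer deep but moves it onto a far richer interaction-feature space with a ridge penalty, while modification (b) simply adds a hidden layer:</p>

```python
import torch
import torch.nn as nn

n_features = 50

# Base model: plain logistic regression.
logreg = nn.Sequential(nn.Linear(n_features, 1), nn.Sigmoid())

# (a) "Statistical" modification: expand inputs with all pairwise interaction
# terms and add a ridge (L2) penalty via weight_decay -- still one layer.
def with_interactions(x):
    i, j = torch.triu_indices(x.shape[1], x.shape[1], offset=1)
    return torch.cat([x, x[:, i] * x[:, j]], dim=1)   # append x_i * x_j, i < j

n_expanded = n_features + n_features * (n_features - 1) // 2  # 50 + 1225
logreg_poly = nn.Sequential(nn.Linear(n_expanded, 1), nn.Sigmoid())
opt = torch.optim.SGD(logreg_poly.parameters(), lr=0.1, weight_decay=1e-3)

# (b) "ML" modification: add a second fully connected layer.
mlp = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                    nn.Linear(32, 1), nn.Sigmoid())

x = torch.randn(4, n_features)
assert logreg(x).shape == mlp(x).shape == (4, 1)
assert with_interactions(x).shape == (4, n_expanded)
```

<p>By parameter count and hypothesis-space size, (a) is arguably the larger change to the model, yet only (b) earns the “ML” label under the dichotomous view.</p>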
<p>In my opinion, the careers of Trevor Hastie and Rob Tibshirani highlight the best of what happens when statisticians interact richly with machine learning researchers. Rather than getting caught up in drawing methodological border lines, they have taken tools developed first by machine learning researchers and helped formally situate them <em>within</em> the world of statistics proper. In this light, I enjoy their frequent use of the term “statistical learning” (as in the title of their textbook), which I think nicely emphasizes the fact that their <em>goals</em> are those of statistics, even if many of the <em>methods</em> in the book have been developed by and for people in ML. (I’ll also point out, a bit immaturely, that I’ve never heard a machine learning researcher complain that Hastie and Tibshirani are trying to annex their methods by not using the phrase “machine learning” when describing neural networks, tree-based methods, etc.) Of course, all of the above is to say nothing of the new methods Hastie and Tibs have generated themselves, which have impacted the daily work of both statisticians and machine learning researchers.</p>
<p>All the above being said, I do appreciate that perfectly reasonable people have come to think of ML as a disjoint set of <em>methods</em> from statistics. The following sections elaborate on why I think this has happened, and what I think this means as a takeaway for the overall discussion.</p>
<h3 id="-distinctions-in-goals-have-yielded-a-divergence-in-methods-and-cultures-which-explains-shifting-connotations-of-the-term-machine-learning--disconnects-in-language-doom-many-debates-to-futility-before-they-begin"><a name="Sect3"></a> Distinctions in goals have yielded a divergence in methods and cultures, which explains shifting connotations of the term “machine learning.” Disconnects in language doom many “debates” to futility before they begin.</h3>
<p>As stated above, the field of machine learning research was founded as computer scientists sought to build and understand intelligent computer systems, and this continues to be the case today. Major ML applications include things like speech recognition, computer vision, robotics/autonomous systems, computational advertising (sigh…), surveillance (sigh…), chat-bots (sigh…), etc. In seeking to solve these problems, machine learning researchers will almost <em>always</em> start by trying classical statistical methods, including the relevant simple GLM (in fact, this is often considered a mandatory baseline for publication in many applied ML areas). Hence my whole discussion about ML not being predicated on a specific method. <em>However</em>, computer scientists have, of course, also significantly added to this toolkit over the years through the development of additional methods.</p>
<p>As with evolution in any other context, the growing phylogeny of statistical methods used for machine learning has been shaped by selective pressures. Compared to statisticians, machine learning researchers typically care much less about <em>understanding</em> any specific action taken by their algorithms (though interpretability is certainly important, and increasingly a bigger priority). Rather, they usually care most about minimizing <em>model errors</em> on held-out data. As such, it makes sense that methods developed by ML researchers are typically more flexible, even at the expense of, for example, interpretability. <a href="https://projecteuclid.org/euclid.ss/1009213726">Leo Breiman</a> and others have written about how these cultures have informed methods development, such as random forests. This often-divergent evolution has made it easy to draw (fuzzy) boundaries between ML and statistics research based entirely on <em>methods</em>. To boot, many statisticians are unaware of the history of ML, and have thus, for years, only ever been exposed to the field by means of the methods it periodically emits. It is thus unsurprising – even if disappointing – that they would define the field in exactly those terms.</p>
<p>By the same token, a sharp division based on <em>use</em> (like the one I advocated for above) is now complicated by the fact that many ML people say they’re doing machine learning even when they’re applying their methods for pure data analysis rather than to drive a computer program. While this is arguably incorrect in a strict historical sense, I don’t fault people for doing it, which they probably do out of a mixture of habit, cultural affiliation, and/or because it sounds cool.</p>
<p>Taken together, people now use “machine learning” to mean very different things. Sometimes, people use it to mean: “I’m using a statistical method to make my program learn” or “I’m developing a data analysis that I hope to deploy in an automated system.” Other times, they mean: “I’m using a method – perhaps for a statistical data analysis – that was originally developed by the machine learning community, like random forests.” Still other times (maybe most of the time…?), they mean: “I consider myself a machine learning researcher, I’m working with data, and I can call this work whatever I darn well please.”</p>
<p>These different uses of the term aren’t really surprising or problematic, because this is simply how language evolves. But it does make it extremely frustrating when a horde of data scientists (oh no, another hypey term! I use it here as the union of ML and statistics) collectively try to debate whether or not a specific project can be branded as ML or must be branded “just statistics.” Usually, when this happens, people enter the discussion with wildly different assumptions – poorly defined, and seldom articulated – about what the words mean in the first place. And then they rarely take the time to understand where others are coming from or what they are actually trying to say. Instead, they typically just talk past each other, louder instead of clearer.</p>
<h3 id="-isnt-this-whole-debate-a-massive-waste-of-time"><a name="Sect4"></a> Isn’t this whole “debate” a massive waste of time?</h3>
<p>Finally, let’s lay our cards out on the table w.r.t. a few real problems: There are many machine learning researchers (or at the very least, machine learning hobbyists) who exhibit an inadequate understanding of statistics for people who work with data for a living. At times, I <em>am</em> such a machine learning researcher! (Though I’d wager that many professional statisticians sometimes feel the same way, too.) Relatedly, but more seriously, ML research moves so fast, and is sometimes so culturally disconnected from the field of statistics, that I think it is all-too-common for even prominent ML researchers to re-discover or re-invent parts of statistics. That’s a problem and a waste. Finally, there is massive brand dilution in ML, because a large third-party population of applied researchers has essentially co-opted the term “machine learning,” applying it to papers just to make them sound fancy, even when in reality they are doing machine learning <em>neither</em> in the sense of automated system building <em>nor</em> in the sense of using new methods that came from ML.</p>
<p>I feel that the solution to all of these problems is to increase recognition that most of ML’s data methods actually live <em>within</em> statistics. Rather than doubling down on a false partition between the two fields, our priority needs to be the cultivation of a robust understanding of statistical principles, whether they are being used for data analysis or for programming intelligent systems. Endless debates about what to <em>call</em> a lot of this work end up distracting people from essential conversations about how to <em>carry out</em> good work by matching the right <strong>specific</strong> tool to the right problem. If anything, a fixation on a false dichotomy between stats and ML methods probably drives many people <em>further</em> into the habit of using unnecessarily complex methods, just to feel (whether for pride or for money) like they are doing “real ML” (whatever on earth that means). It also directly feeds the issue that causes people to call their work ML just for the sake of sounding methodologically fancy.</p>
<p>Finally, this golden age of statistical computing is driving these two fields closer than ever. ML research, of course, lives within computer science, and the modern statistician is increasingly dependent upon the algorithms and software stack that have been pioneered by CS departments for decades. Modern statisticians – especially in fields like computational biology – are also increasingly finding use for methods pioneered by ML researchers for, say, regression in high dimensions or at large scale. On the flip side, the ML community is becoming increasingly concerned with topics like interpretability, fairness, certifiable robustness, etc., which is leading many researchers’ priorities to align more directly with the traditional values of statistics. At the very least, even when a system is deployed using the most convoluted architectures possible, it’s pretty universally recognized that classical statistics is necessary to measure and evaluate performance.</p>
<h3 id="in-summary">In summary:</h3>
<p>The whole debate is misguided, the terms are overloaded, the methodological dichotomy is false, ML people care (and increasingly so) about statistics, stats people are increasingly dependent upon CS and ML, and there is no regression annexation conspiracy. There’s a lot of hype out there right now, but that doesn’t change the fact that, <em>often</em>, when people use different terminology than you, that’s because they come from a different background or have different goals in mind, not because they are stupid or dishonest. Let’s just all be friends and strive to do good work together and learn from each other. Kumbaya.</p>
Fri, 31 Jan 2020 05:00:00 -0500
http://sgfin.github.io/2020/01/31/Comments-ML-Statistics/
All the DAGs from Hernan and Robins' Causal Inference Book<p>This is my preliminary attempt to organize and present all the DAGs from Miguel Hernan and Jamie Robins’ excellent <a href="https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/">Causal Inference Book</a>. So far, I’ve only done Part I.</p>
<p>Why create this page?</p>
<ul>
<li>First, I love the Causal Inference book, but sometimes I find it easy to lose track of the variables when I read it. Having the variables right alongside the DAG makes it easier for me to remember what’s going on, especially when the book refers back to a DAG from a previous chapter and I don’t want to dig back through the text.</li>
<li>Second, I hope this will be a really fun way to occasionally review the main concepts from the text in the future.</li>
<li>Finally, making this as I read forced me to read the book more carefully.</li>
</ul>
<p>Again, this page is meant to be fairly raw and only contain the DAGs. If you use it, you might also find it useful to open up <a href="https://sgfin.github.io/2019/06/19/Causal-Inference-Book-Glossary-and-Notes/">this page</a>, which is where I have more traditional notes covering the main concepts from the book. But of course, the text itself has no substitute.</p>
<p><strong>Table of Contents</strong>:</p>
<ul id="markdown-toc">
<li><a href="#refresher-visual-rules-of-d-separation" id="markdown-toc-refresher-visual-rules-of-d-separation">Refresher: Visual rules of d-separation.</a></li>
<li><a href="#refresher-backdoor-criterion" id="markdown-toc-refresher-backdoor-criterion">Refresher: Backdoor criterion</a></li>
<li><a href="#basics-of-causal-diagrams-61-65" id="markdown-toc-basics-of-causal-diagrams-61-65">Basics of Causal Diagrams (6.1-6.5)</a></li>
<li><a href="#effect-modification-66" id="markdown-toc-effect-modification-66">Effect Modification (6.6)</a></li>
<li><a href="#confounding-chapter-7" id="markdown-toc-confounding-chapter-7">Confounding (Chapter 7)</a></li>
<li><a href="#selection-bias-chapter-8" id="markdown-toc-selection-bias-chapter-8">Selection Bias (Chapter 8)</a></li>
<li><a href="#measurement-bias-chapter-9" id="markdown-toc-measurement-bias-chapter-9">Measurement Bias (Chapter 9)</a></li>
</ul>
<h3 id="refresher-visual-rules-of-d-separation">Refresher: Visual rules of d-separation.</h3>
<p>Two variables on a DAG are <strong>d-separated</strong> if all paths between them are blocked. The following four rules define what it means to be “blocked.”</p>
<p>(This is just meant to be a refresher – see the second half of this post or Fine Point 6.1 of the text for more definitions.)</p>
<table>
<thead>
<tr>
<th style="text-align: center">Rule</th>
<th style="text-align: center">Example</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center">1. If there are no variables being conditioned on, a path is blocked if and only if two arrowheads on the path collide at some variable on the path.</td>
<td style="text-align: center"><img src="/assets/hernan_dags/6_1.png" width="175" /> <br /><br /> \(L \rightarrow A \rightarrow Y\) is open. <br /><br /> \(A \rightarrow Y \leftarrow L\) is blocked at \(Y\)</td>
</tr>
<tr>
<td style="text-align: center">2. Any path that contains a noncollider that has been conditioned on is blocked.</td>
<td style="text-align: center"><img src="/assets/hernan_dags/6_5.png" width="175" /> <br /><br /> Conditioning on \(B\) blocks the path from \(A\) to \(Y\).</td>
</tr>
<tr>
<td style="text-align: center">3. A collider that has been conditioned on does not block a path.</td>
<td style="text-align: center"><img src="/assets/hernan_dags/6_7.png" width="175" /> <br /><br /> The path between \(A\) and \(Y\) is open after conditioning on \(L\).</td>
</tr>
<tr>
<td style="text-align: center">4. A collider that has a descendant that has been conditioned on does not block a path.</td>
<td style="text-align: center"><img src="/assets/hernan_dags/6_8.png" width="175" /> <br /><br /> The path between \(A\) and \(Y\) is open after conditioning on \(C\), a descendant of collider \(L\).</td>
</tr>
</tbody>
</table>
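<p>These four rules can be checked mechanically. Below is a minimal, self-contained Python sketch (my own illustration, not from the book) that decides d-separation via the equivalent ancestral moral graph criterion: restrict the DAG to ancestors of the variables involved, moralize it, delete the conditioned nodes, and test connectivity. Graphs are encoded as dicts mapping each node to its set of children; the variable names are mine.</p>

```python
from itertools import combinations

def ancestors(graph, nodes):
    """Nodes with a directed path into `nodes`, plus `nodes` themselves."""
    result, frontier = set(nodes), set(nodes)
    while frontier:
        frontier = {p for p, kids in graph.items()
                    if p not in result and kids & frontier}
        result |= frontier
    return result

def d_separated(graph, x, y, z):
    """Test x indep. y given z on a DAG given as {node: set of children}."""
    z = set(z)
    keep = ancestors(graph, {x, y} | z)        # 1. ancestral subgraph
    und = {n: set() for n in keep}             # 2. moralize (undirected)
    for p in keep:
        for c in graph.get(p, set()) & keep:
            und[p].add(c)
            und[c].add(p)
    for c in keep:                             # marry co-parents of each node
        parents = [p for p in keep if c in graph.get(p, set())]
        for p1, p2 in combinations(parents, 2):
            und[p1].add(p2)
            und[p2].add(p1)
    seen, frontier = {x}, {x}                  # 3. delete z, test connectivity
    while frontier:
        frontier = {m for n in frontier for m in und[n]
                    if m not in seen and m not in z}
        seen |= frontier
    return y not in seen

# The four rules above, on the table's example graph shapes:
fig_6_1 = {"L": {"A", "Y"}, "A": {"Y"}, "Y": set()}
chain = {"A": {"B"}, "B": {"Y"}, "Y": set()}                  # Fig 6.5 shape
collider = {"A": {"L"}, "Y": {"L"}, "L": {"C"}, "C": set()}   # Figs 6.7/6.8

assert not d_separated(fig_6_1, "A", "Y", [])      # rule 1: L -> A -> Y open
assert d_separated(collider, "A", "Y", [])         # rule 1: blocked at collider
assert d_separated(chain, "A", "Y", ["B"])         # rule 2: noncollider in z
assert not d_separated(collider, "A", "Y", ["L"])  # rule 3: collider in z
assert not d_separated(collider, "A", "Y", ["C"])  # rule 4: descendant in z
```

<p>Each assertion mirrors one row of the table above.</p>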
<h3 id="refresher-backdoor-criterion">Refresher: Backdoor criterion</h3>
<p>Assuming positivity and consistency, confounding can be eliminated and causal effects are identifiable in the following two settings:</p>
<table>
<thead>
<tr>
<th style="text-align: center">Rule</th>
<th style="text-align: center">Example</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center">1. No common causes of treatment and outcome.</td>
<td style="text-align: center"><img src="/assets/hernan_dags/6_2.png" width="150" /> <br /><br /> There are no common causes of treatment and outcome. Hence no backdoor paths need to be blocked. <br /><br /> No confounding; equivalent to a marginally randomized trial.</td>
</tr>
<tr>
<td style="text-align: center">2. Common causes are present, but there are enough measured variables to block all colliders. (i.e. No unmeasured confounding.)</td>
<td style="text-align: center"><img src="/assets/hernan_dags/6_1.png" width="175" /> <br /><br /> Backdoor path through the common cause \(L\) can be blocked by conditioning on measured covariates (in this case, \(L\) itself) that are non-descendants of treatment. <br /><br /> There will be no residual confounding after controlling for \(L\); equivalent to a conditionally randomized trial.</td>
</tr>
</tbody>
</table>
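<p>In setting 2, blocking the backdoor path licenses standardization: \(\Pr[Y^{a}=1] = \sum_l \Pr[Y=1 \mid A=a, L=l]\Pr[L=l]\). A tiny numerical sketch of that formula, with entirely hypothetical probabilities chosen only to make it concrete:</p>

```python
# Standardization over the measured confounder L (setting 2 above).
# All probabilities are hypothetical, chosen only to illustrate
#   Pr[Y^a = 1] = sum_l Pr[Y = 1 | A = a, L = l] * Pr[L = l].

p_L = {0: 0.6, 1: 0.4}          # Pr[L = l]
p_Y1_given_AL = {               # Pr[Y = 1 | A = a, L = l], keyed (a, l)
    (0, 0): 0.10, (0, 1): 0.50,
    (1, 0): 0.20, (1, 1): 0.70,
}

def standardized_risk(a):
    return sum(p_Y1_given_AL[(a, l)] * p_L[l] for l in p_L)

risk_treated = standardized_risk(1)      # 0.20*0.6 + 0.70*0.4 = 0.40
risk_untreated = standardized_risk(0)    # 0.10*0.6 + 0.50*0.4 = 0.26
causal_risk_difference = risk_treated - risk_untreated   # 0.14
```

<p>Under conditional exchangeability given \(L\), the 0.14 risk difference has a causal interpretation; the naive crude contrast would generally differ.</p>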
<p>And now we can finally:</p>
<p style="text-align:center;">
<img src="/assets/hernan_dags/all_the_dags.jpg" width="400" />
</p>
<h3 id="basics-of-causal-diagrams-61-65">Basics of Causal Diagrams (6.1-6.5)</h3>
<table>
<thead>
<tr>
<th style="text-align: center">DAG</th>
<th style="text-align: center">Example</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_2.png" width="150" /></td>
<td style="text-align: center"><em>Marginally randomized experiment</em> <br /><br /> <strong>A</strong>: Treatment <br /><br /> <strong>Y</strong>: Outcome</td>
<td>Arrow doesn’t specifically imply protection vs risk, just causal effect. <br /><br /> <strong>Unconditional exchangeability</strong> assumption means that association implies causation and vice versa.</td>
<td>I.70</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_1.png" width="175" /></td>
<td style="text-align: center"><em>Conditionally randomized experiment</em> <br /><br /> <strong>L</strong>: Stratification Variable <br /><br /> <strong>A</strong>: Treatment <br /><br /> <strong>Y</strong>: Outcome</td>
<td>Also equivalent to an <em>Observational Study</em> that assumes A depends on L and on <em>no other causes of Y</em> (else they’d need to be added). <br /><br /> Implies <strong>conditional exchangeability</strong>.</td>
<td>I.69-I.70</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_5.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Aspirin <br /><br /> <strong>B</strong>: Platelet aggregation <br /><br /> <strong>Y</strong>: Heart Disease</td>
<td>\(B\) is a <strong>mediator</strong> of \(A\)’s effect on \(Y\), but conditioning on \(B\) (e.g. by restricting the analysis to people with a specific lab value) blocks the flow of association through the path A \(\rightarrow\) B \(\rightarrow\) Y. <br /><br /> Even though \(A\) and \(Y\) are <em>marginally</em> associated, they are <strong>conditionally independent</strong> given \(B\). In other words, A \(\unicode{x2AEB}\) Y | B. <br /><br /> Thus, knowing aspirin status gives you no more information once platelets are measured, at least according to this graph.</td>
<td>I.73</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_3.png" width="175" /></td>
<td style="text-align: center"><strong>L</strong>: Smoking status <br /><br /> <strong>A</strong>: Carrying a lighter <br /><br /> <strong>Y</strong>: Lung cancer</td>
<td>Graph says that carrying a lighter (A) has no causal effect on outcome (Y). <br /> Math form of this assumption: \(\Pr[Y^{a=1}=1]=\Pr[Y^{a=0}=1]\) <br /><br /> However, \(A\) will be spuriously associated with \(Y\), because path <br /> A \(\leftarrow\) L \(\rightarrow\) Y is open to flow from A to Y: they share a common cause.</td>
<td>I.72</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_6.png" width="175" /></td>
<td style="text-align: center"><strong>L</strong>: Smoking status <br /><br /> <strong>A</strong>: Carrying a lighter <br /><br /> <strong>Y</strong>: Lung cancer</td>
<td>A \(\unicode{x2AEB}\) Y | L, because the path A \(\leftarrow\) L \(\rightarrow\) Y is closed by conditioning on L. <br /><br /> Thus, restricting the analysis to either smokers or non-smokers (box around L) means that lighter carrying will no longer be associated with lung cancer.</td>
<td>I.74</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_4.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Genetic predisposition for heart disease <br /><br /> <strong>Y</strong>: Smoking status <br /><br /> <strong>L</strong>: Heart disease</td>
<td>\(A\) and \(Y\) are not marginally associated, because they share no common causes. (i.e. Genetic risk for heart disease says nothing, in a vacuum, about smoking status.) <br /><br /> \(L\) here is a <strong>collider</strong> on the path A \(\rightarrow\) L \(\leftarrow\) Y, because the two arrows collide on this node. But there is no causal path from \(A\) to \(Y\).</td>
<td>I.73</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_7.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Genetic predisposition for heart disease <br /><br /> <strong>Y</strong>: Smoking status <br /><br /> <strong>L</strong>: Heart disease</td>
<td><strong>Conditioning on the collider</strong> \(L\) opens the causal path A \(\rightarrow\) L \(\leftarrow\) Y. <br /><br /> Put another way, two causes of a given effect generally become associated once we stratify on the common effect. <br /><br /> In the example, knowing someone with heart disease lacks haplotype A makes it more likely that the individual is a smoker, because, in the absence of \(A\), it is more likely that some other cause of \(L\) is present. Or, conversely, the population of non-smokers with heart disease will be enriched for people with haplotype A. Thus, if one restricts the analysis to people with heart disease, he will find a spurious anti-correlation between the haplotype predictive of heart disease and smoking status.</td>
<td>I.74</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_8.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Genetic predisposition for heart disease <br /><br /> <strong>Y</strong>: Smoking status <br /><br /> <strong>L</strong>: Heart disease <br /><br /> <strong>C</strong>: Diuretic medication (given after heart disease diagnosis)</td>
<td>Conditioning on variable \(C\) <strong>downstream from collider</strong> \(L\) also opens up causal path A \(\rightarrow\) L \(\leftarrow\) Y. <br /><br /> Thus, in the example, stratifying on \(C\) (diuretic status) will induce a spurious relationship between \(A\) (genetic heart disease risk) and \(Y\) (smoking status).</td>
<td>I.75</td>
</tr>
<tr>
<td style="text-align: center"><em>Before matching</em>: <br /> <img src="/assets/hernan_dags/6_1.png" width="175" /><br /><br /> <em>After matching</em>:<br /> <img src="/assets/hernan_dags/6_9.png" width="175" /></td>
<td style="text-align: center"><em>Matched analysis</em> <br /><br /> <strong>L</strong>: Critical Condition <br /><br /> <strong>A</strong>: Heart Transplant <br /><br /> <strong>Y</strong>: Death <br /><br /> <strong>S</strong>: Selection for inclusion via matching criteria</td>
<td>In this study design, the average causal effect of \(A\) on \(Y\) is computed after matching on \(L\). <br /><br /> Before matching, \(L\) and \(A\) are associated via the path \(L \rightarrow A\). <br /><br /> Matching is represented in the DAG through the addition of \(S\), the selection criteria. The study is obviously restricted to patients that are selected (\(S\)=1), hence we condition on \(S\). <br /><br /> d-separation rules say that there are now two open paths between \(A\) and \(L\) after conditioning on \(S\): \(L \rightarrow A\) and \(L \rightarrow S \leftarrow A\). This seems to indicate an association between \(L\) and \(A\). However, the point of matching is supposed to be to make sure that \(L\) and \(A\) are not associated! <br /><br /> The resolution comes from the observation that \(S\) has been constructed specifically to induce the distribution of \(L\) to be the same in the treated (\(A\)=1) and untreated (\(A\)=0) population. This means that the association in \(L \rightarrow S \leftarrow A\) is of equal magnitude but opposite direction of \(L \rightarrow A\). Thus there is no net association between \(A\) and \(L\). <br /><br /> This disconnect between the associations visible in the DAG and the associations actually present is an example of <strong>unfaithfulness</strong>, but here it has been introduced by design.</td>
<td>I.49 and I.79</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_10.png" width="175" /></td>
<td style="text-align: center"><strong>R</strong>: Compound treatment (see right) <br /><br /> <strong>A</strong>: Vector of treatment versions \(A( r )\) (see right) <br /><br /> <strong>Y</strong>: Outcome <br /><br /> <strong>L</strong> and <strong>W</strong>: unnamed causes <br /><br /> <strong>U</strong>: unmeasured variables</td>
<td>This is the example the book uses of how to encode compound treatments. <br /><br /> The example compound treatment is as follows: <br /><br /> R=0 corresponds to “exercising <em>less</em> than 30 minutes daily”. <br /> R=1 corresponds to “exercising <em>more</em> than 30 minutes daily.” <br /><br /> \(A\) is a vector corresponding to different <em>versions</em> of the treatment, where \(A(r=0)\) can take on values \(0,1,2,\dots, 29\) and \(A(r=1)\) can take on values \(30,31\dots, max\) <br /><br /> Taken together, we can have a mapping from multiple values \(A( r )\) onto a single value \(R=r\).</td>
<td>I.78</td>
</tr>
</tbody>
</table>
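<p>The collider rows above (Figures 6.7 and 6.8) are easy to reproduce by simulation. In this sketch (all parameters invented for illustration), haplotype \(A\) and smoking \(Y\) are generated independently, yet restricting the data to heart-disease cases manufactures an inverse association between them:</p>

```python
import random

random.seed(0)

# Simulating Figure 6.7 with made-up parameters: haplotype A and smoking Y
# are independent causes of heart disease L.
n = 200_000
data = []
for _ in range(n):
    a = random.random() < 0.3                           # genetic predisposition
    y = random.random() < 0.3                           # smoking, independent of A
    l = random.random() < 0.05 + 0.30 * a + 0.30 * y    # heart disease
    data.append((a, y, l))

def smoking_rate(rows, a):
    sub = [r for r in rows if r[0] == a]
    return sum(r[1] for r in sub) / len(sub)

# Marginally, A carries no information about Y (both rates ~0.30)...
marginal = smoking_rate(data, True), smoking_rate(data, False)

# ...but restricting to heart-disease cases (conditioning on collider L)
# induces a spurious inverse association: non-carriers who developed heart
# disease are disproportionately likely to have gotten it from smoking.
cases = [r for r in data if r[2]]
conditional = smoking_rate(cases, True), smoking_rate(cases, False)
```

<p>With these parameters, the analytic smoking rates among cases are roughly 44% for haplotype carriers versus 75% for non-carriers, despite exact marginal independence.</p>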
<h3 id="effect-modification-66">Effect Modification (6.6)</h3>
<table>
<thead>
<tr>
<th style="text-align: center">DAG</th>
<th style="text-align: center">Example</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_11.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Heart Transplant <br /><br /> <strong>Y</strong>: Outcome <br /><br /> <strong>M</strong>: Quality of Care. High (\(M=1\)) vs Low (\(M=0\))</td>
<td>This DAG reflects the assumption that quality of care influences quality of transplant procedure and thus of outcomes, BUT still assumes random assignment of treatment. <br /><br /> Given random assignment, \(M\) is not strictly necessary but added if you want to use it to stratify. <br /><br /> Causal diagram as such does not distinguish between: <br /> 1. Causal effect of treatment \(A\) on mortality \(Y\) is in the same direction in both stratum \(M=1\) and \(M=0\). <br /> 2. The causal effect of \(A\) on \(Y\) is in the opposite direction in \(M=1\) vs \(M=0\). <br /> 3. Treatment \(A\) has a causal effect on \(Y\) in one stratum of \(M\) but no effect in the other stratum.</td>
<td>I.80</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_12.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Heart Transplant <br /><br /> <strong>Y</strong>: Outcome <br /><br /> <strong>M</strong>: Quality of Care. High (\(M=1\)) vs Low (\(M=0\)) <br /><br /> <strong>N</strong>: Therapy Complications</td>
<td>Same example as above, except assumes that other variables along the path of a modifier can also influence outcomes.</td>
<td>I.80</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_13.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Heart Transplant <br /><br /> <strong>Y</strong>: Outcome <br /><br /> <strong>M</strong>: Quality of Care. High (\(M=1\)) vs Low (\(M=0\)) <br /><br /> <strong>S</strong>: Cost of treatment</td>
<td>Same example as above, except assumes that the quality of care affects the cost, but that the cost does not influence the outcome. <br /><br /> This is the example of an effect modifier that does not have a causal effect on the outcome, but rather stands as a <strong>surrogate effect modifier</strong>. <br /><br /> Analysis stratifying on \(S\) – which is available/objective – might be used to detect effect modification that actually comes from \(M\) but is harder to measure.</td>
<td>I.80</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_14.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Heart Transplant <br /><br /> <strong>Y</strong>: Outcome <br /><br /> <strong>M</strong>: Quality of Care. High (\(M=1\)) vs Low (\(M=0\)) <br /><br /> <strong>U</strong>: Place of residence <br /><br /> <strong>P</strong>: Passport-defined nationality</td>
<td>Example where the surrogate effect modifier (passport) is not driven by the causal effect modifier (quality of care), but rather both are driven by a common cause (place of residence).</td>
<td>I.80</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/6_15.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Heart Transplant <br /><br /> <strong>Y</strong>: Outcome <br /><br /> <strong>M</strong>: Quality of Care. High (\(M=1\)) vs Low (\(M=0\)) <br /><br /> <strong>S</strong>: Cost of Care <br /><br /> <strong>W</strong>: Use of mineral water vs tap</td>
<td>Example where the surrogate effect modifier (cost) is influenced by <em>both</em> the causal effect modifier (quality) and something spurious. <br /><br /> If the study were restricted to low-cost hospitals by conditioning on \(S=0\), then use of mineral water would become associated with medical care \(M\) and would behave as a surrogate effect modifier. <br /><br /> Addendum: How? One example might be that conditioned on a low cost, a zero sum situation may arise in which spending more on fancy water means less is being spent on quality care, which could yield an inverse correlation between mineral water and medical quality.</td>
<td>I.81</td>
</tr>
</tbody>
</table>
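<p>The surrogate effect modifier of Figure 6.13 can also be illustrated by simulation. In this sketch (all parameters invented), quality of care \(M\) modifies the effect of treatment \(A\) on outcome \(Y\), while cost \(S\) merely tracks \(M\) and has no causal effect of its own; stratifying on \(S\) alone still surfaces an (attenuated) version of the effect modification:</p>

```python
import random

random.seed(1)

# Simulating Fig 6.13 with invented parameters: M modifies the effect of A
# on Y; S is a noisy proxy for M with no causal effect on Y.
rows = []
for _ in range(100_000):
    m = random.random() < 0.5                    # high quality of care
    s = m if random.random() < 0.9 else not m    # high cost tracks quality
    a = random.random() < 0.5                    # randomized treatment
    y = random.random() < 0.3 + (0.4 if m else 0.1) * a   # bigger effect if m
    rows.append((a, s, y))

def risk_difference(rows, s):
    """Pr[Y=1 | A=1, S=s] - Pr[Y=1 | A=0, S=s]."""
    treated = [y for a, s_, y in rows if s_ == s and a]
    control = [y for a, s_, y in rows if s_ == s and not a]
    return sum(treated) / len(treated) - sum(control) / len(control)

rd_high_cost = risk_difference(rows, True)    # ~0.37 analytically
rd_low_cost = risk_difference(rows, False)    # ~0.13 analytically
```

<p>The true stratum-specific effects (0.4 vs 0.1) are attenuated toward each other because \(S\) only imperfectly tracks \(M\), which matches the book's point that a surrogate can reveal, but also dilute, the underlying effect modification.</p>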
<h3 id="confounding-chapter-7">Confounding (Chapter 7)</h3>
<table>
<thead>
<tr>
<th style="text-align: center">DAG</th>
<th style="text-align: center">Example</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/7_1.png" width="175" /></td>
<td style="text-align: center"><strong>L</strong>: Being physically fit <br /><br /> <strong>A</strong>: Working as a firefighter <br /><br /> <strong>Y</strong>: Mortality</td>
<td>The path \(A \rightarrow Y\) is a causal path from \(A\) to \(Y\). <br /><br /> \(A \leftarrow L \rightarrow Y\) is a <strong>backdoor path</strong> between \(A\) and \(Y\), mediated by common cause (<strong>confounder</strong>) \(L\). <br /><br /> Conditioning on \(L\) will block the backdoor path, induce conditional exchangeability, and allow for causal inference. <br /><br /> <em>Note</em>: This is an example of “healthy worker bias.”</td>
<td>I.83</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/7_2.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Aspirin <br /><br /> <strong>Y</strong>: Stroke <br /><br /> <strong>L</strong>: Heart Disease <br /><br /> <strong>U</strong>: Atherosclerosis (unmeasured)</td>
<td>This DAG is an example of <strong>confounding by indication</strong> (or <strong>channeling</strong>). <br /><br /> Aspirin will have a confounded association with stroke, both from heart disease (\(L \rightarrow A \rightarrow Y\)), and from atherosclerosis (\(U \rightarrow L \rightarrow A \rightarrow Y\)). <br /><br /> Conditioning on unmeasured \(U\) is impossible, but there is <em>no unmeasured confounding</em> given \(L\), so conditioning on \(L\) is sufficient.</td>
<td>I.84</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/7_3.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Exercise <br /><br /> <strong>Y</strong>: Death <br /><br /> <strong>L</strong>: Smoking status <br /><br /> <strong>U</strong>: Social Factors (unmeasured) or Subclinical Disease (undetected)</td>
<td>Conditioning on \(L\) is again sufficient to block the backdoor path in this case.</td>
<td>I.84</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/7_4.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Physical activity <br /><br /> <strong>Y</strong>: Cervical Cancer <br /><br /> <strong>L</strong>: Pap smear <br /><br /> <strong>U_1</strong>: Pre-cancer lesion (unmeasured here) <br /><br /> <strong>U_2</strong>: Health-conscious personality (unmeasured)</td>
<td>Example shows how <strong>conditioning on a collider can induce bias</strong>. <br /><br /> Adjustment for \(L\) (e.g. by restricting to negative tests \(L=0\)) will <em>induce</em> bias by opening a backdoor path between \(A\) and \(Y\) (\(A \leftarrow U_2 \rightarrow L \leftarrow U_1 \rightarrow Y\)), previously blocked by the collider. This is a case of <em>selection bias</em>. <br /><br /> Thus, after conditioning, association between \(A\) and \(Y\) would be a mixture of association due to effect of \(A\) on \(Y\) and backdoor path. In other words, there is no <em>unconditional bias</em>, but there would be a conditional bias for at least one stratum of \(L\).</td>
<td>I.88</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/7_5.png" width="175" /></td>
<td style="text-align: center">(Labels not in book)<br /><br /> <strong>A</strong>: Antacid <br /><br /> <strong>L</strong>: Heartburn <br /><br /> <strong>Y</strong>: Heart attack <br /><br /> <strong>U</strong>: Obesity</td>
<td>A nonconfounding example in which traditional analysis might lead you to adjust for \(L\), but doing so would <em>induce a bias</em>.</td>
<td>I.89</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/7_6.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Physical activity <br /><br /> <strong>L</strong>: Income <br /><br /> <strong>Y</strong>: Cardiovascular disease <br /><br /> <strong>U</strong>: Socioeconomic status</td>
<td>\(L\) (income) is <em>not</em> a confounder, but is a measurable variable that could serve as a <strong>surrogate confounder</strong> for \(U\) (socioeconomic status) and thus could be used to partially adjust for the confounding from \(U\). <br /><br /> In other words, conditioning on \(L\) will result in a partial blockage of the backdoor path \(A \leftarrow U \rightarrow Y\).</td>
<td>I.90</td>
</tr>
<tr>
<td style="text-align: center"><em>Normal DAG</em>: <br /> <img src="/assets/hernan_dags/7_2.png" width="175" /> <br /><br /> <em>Corresponding SWIG</em>: <br /> <img src="/assets/hernan_dags/7_7.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Aspirin <br /><br /> <strong>Y</strong>: Stroke <br /><br /> <strong>L</strong>: Heart Disease <br /><br /> <strong>U</strong>: Atherosclerosis (unmeasured)</td>
<td>Represents data from a hypothetical intervention in which all individuals receive the same treatment level \(a\). <br /><br /> Treatment is split into two sides: <br /> (a) Left side encodes the values of treatment \(A\) that would have been observed in the absence of intervention (<em>the natural value of treatment</em>) <br /> (b) Right side encodes the treatment value under the intervention. <br /> No arrow points into \(a\) because \(a\) is set to the same value for everyone. <br /><br /> Conditional exchangeability \(Y^{a} \unicode{x2AEB} A | L\) holds because all paths between \(Y^{a}\) and \(A\) are blocked after conditioning on \(L\).</td>
<td>I.91</td>
</tr>
<tr>
<td style="text-align: center"><em>Normal DAG</em>: <br /> <img src="/assets/hernan_dags/7_4.png" width="175" /> <br /><br /> <em>Corresponding SWIG</em>: <br /> <img src="/assets/hernan_dags/7_8.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Physical activity <br /><br /> <strong>Y</strong>: Cervical Cancer <br /><br /> <strong>L</strong>: Pap smear <br /><br /> <strong>U_1</strong>: Pre-cancer lesion (unmeasured here) <br /><br /> <strong>U_2</strong>: Health-conscious personality (unmeasured)</td>
<td>Here, marginal exchangeability \(Y^{a} \unicode{x2AEB} A\) holds because, on the SWIG, all paths between \(Y^{a}\) and \(A\) are blocked without conditioning on \(L\). <br /><br /> Conditional exchangeability \(Y^{a} \unicode{x2AEB} A | L\) does not hold because, on the SWIG, the path \(Y^{a} \leftarrow U_1 \rightarrow L \leftarrow U_2 \rightarrow A\) is open when the collider \(L\) is conditioned on. <br /><br /> Taken together, the marginal \(A-Y\) association is causal, but the conditional \(A-Y\) association given \(L\) is not.</td>
<td>I.91</td>
</tr>
<tr>
<td style="text-align: center"><em>Normal DAG</em>: <br /> <img src="/assets/hernan_dags/7_9.png" width="175" /> <br /><br /> <em>Corresponding SWIG</em>: <br /> <img src="/assets/hernan_dags/7_10.png" width="175" /></td>
<td style="text-align: center">(Example labels not in book)<br /><br /> <strong>A</strong>: Statins <br /><br /> <strong>Y</strong>: Coronary artery disease <br /><br /> <strong>L</strong>: HDL/LDL <br /><br /> <strong>U</strong>: Race</td>
<td>In this example, the SWIG is used to highlight a failure of the DAG to provide conditional exchangeability \(Y^{a} \unicode{x2AEB} A | L\). <br /><br /> In the SWIG, the factual variable \(L\) is replaced by the counterfactual variable \(L^{a}\). In this SWIG, counterfactual exchangeability \(Y^{a} \unicode{x2AEB} A | L^{a}\) holds, since \(L^{a}\) blocks the paths from \(Y^{a}\) to \(A\). But \(L\) is not even on the graph, so we can’t conclude \(Y^{a} \unicode{x2AEB} A | L\) holds. <br /><br /> The problem being highlighted here is that \(L\) is a <em>descendant</em> of the treatment \(A\) blocking the path to \(Y\). <br /><br /> In contrast, if the arrow from \(A\) to \(L\) didn’t exist, \(L\) would not be a descendant of \(A\) and adjusting for \(L\) <em>would</em> eliminate all bias, even if \(L\) were still in the future of \(A\). Thus, confounders are allowed to be in the future of the treatment, they just can’t be descendants of it.</td>
<td>I.92</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/7_11.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Aspirin <br /><br /> <strong>Y</strong>: Blood Pressure <br /><br /> <strong>U</strong>: History of heart disease (unmeasured) <br /><br /> <strong>C</strong>: Blood pressure right before treatment (“placebo test” aka “negative outcome control”)</td>
<td>This example was used to show <strong>difference-in-difference</strong> and <strong>negative outcome controls</strong>. <br /><br /> The idea: We cannot compute the effect of \(A\) on \(Y\) via standardization or IP weighting because there is unmeasured confounding. Instead, we first measure the (“negative”) outcome \(C\) right before treatment. Obviously \(A\) has no effect on \(C\), but we can assume that \(U\) will have the same confounding effect on \(C\) that it has on \(Y\). <br /><br /> As such, we take the effect in the treated to be the effect of \(A\) on \(Y\) (treatment effect + confounding effect) <em>minus</em> the effect of \(A\) on \(C\) (confounding effect). This is the difference-in-differences. <br /><br /> Negative outcome controls are sometimes used to try to <em>detect</em> confounding.</td>
<td>I.95</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/7_12.png" width="175" /></td>
<td style="text-align: center">(No example labels in text) <br /><br /> <strong>A</strong>: Aspirin <br /><br /> <strong>M</strong>: Platelet Aggregation <br /><br /> <strong>Y</strong>: Heart Attack <br /><br /> <strong>U</strong>: High Cardiovascular Risk</td>
<td>This example demonstrates the <strong>frontdoor criterion</strong> (see notes or page I.96 for more details). <br /><br /> Given this DAG, it is impossible to directly use standardization or IP weighting, because the unmeasured variable \(U\) is necessary to block the backdoor path between \(A\) and \(Y\). <br /><br /> However, <strong>frontdoor adjustment</strong> can be used because: <br /> (i) the effect of \(A\) on \(M\) can be computed without confounding, and <br /> (ii) the effect of \(M\) on \(Y\) can be computed because conditioning on \(A\) blocks the backdoor path \(M \leftarrow A \leftarrow U \rightarrow Y\).</td>
<td>I.95</td>
</tr>
</tbody>
</table>
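<p>The difference-in-differences logic above (the aspirin example) can be sanity-checked with a quick simulation. Everything here is an illustrative assumption (linear effects, normal noise, a true effect of -1), not an example from the book, but it shows the A-C contrast absorbing exactly the confounding that biases the A-Y contrast:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unmeasured confounder U (e.g., history of heart disease)
U = rng.normal(size=n)
# Treatment A depends on U, so the A-Y association is confounded
A = (U + rng.normal(size=n) > 0).astype(float)
# Negative outcome C: measured right before treatment; affected by U but not A
C = 2.0 * U + rng.normal(size=n)
# Outcome Y: affected by U and by A, with true effect tau = -1
tau = -1.0
Y = 2.0 * U + tau * A + rng.normal(size=n)

# Naive contrast is biased by the confounding through U
naive = Y[A == 1].mean() - Y[A == 0].mean()
# The A-C contrast isolates the confounding, since A cannot affect C
confounding = C[A == 1].mean() - C[A == 0].mean()
# Difference-in-differences recovers the effect in the treated
did = naive - confounding

print(f"naive: {naive:.3f}, did: {did:.3f}")
```

<p>With these settings the naive contrast is badly biased (it even has the wrong sign), while the difference-in-differences lands near the true effect.</p>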
<p><strong>Some additional (but structurally redundant) examples of confounding from chapter 7</strong>:</p>
<table>
<thead>
<tr>
<th style="text-align: center">DAG</th>
<th style="text-align: center">Example</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/7_3.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Exercise <br /><br /> <strong>Y</strong>: Death <br /><br /> <strong>L</strong>: Smoking status <br /><br /> <strong>U</strong>: Social Factors (unmeasured) or Subclinical Disease (undetected)</td>
<td><em>Subclinical disease</em> could also result both in lack of exercise \(A\) and increased risk of clinical disease \(Y\). This is an example of <strong>reverse causation</strong>.</td>
<td>I.84</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/7_3.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Gene being tested <br /><br /> <strong>Y</strong>: Trait <br /><br /> <strong>L</strong>: Different gene in LD with gene A <br /><br /> <strong>U</strong>: Ethnicity</td>
<td><em>Linkage disequilibrium</em> can drive spurious associations between gene \(A\) and trait \(Y\) if the true causal gene \(L\) is in LD with \(A\) in patients with ethnicity \(U\).</td>
<td>I.84</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/7_3.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Airborne particulate matter <br /><br /> <strong>Y</strong>: Coronary artery disease <br /><br /> <strong>L</strong>: Other pollutants <br /><br /> <strong>U</strong>: Weather conditions</td>
<td><em>Environmental exposures</em> often co-vary with the weather conditions. As such, certain pollutants \(A\) may be spuriously associated with outcome \(Y\) simply because the weather drives them to co-occur with \(L\).</td>
<td>I.84</td>
</tr>
</tbody>
</table>
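<p>The three rows above share one structure: an unmeasured \(U\) drives both ends of a backdoor path, but a measured \(L\) sits on that path, so adjusting for \(L\) blocks it. A minimal linear sketch of that idea (the structure A &larr; L &larr; U &rarr; Y with no true effect of A is my illustrative assumption, as are all coefficients):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Unmeasured U (e.g., social factors) causes measured L (e.g., smoking)
U = rng.normal(size=n)
L = U + 0.5 * rng.normal(size=n)
# Treatment A is driven by L only; A has no effect on Y (the null is true)
A = L + 0.5 * rng.normal(size=n)
# Outcome Y is driven by U only
Y = U + 0.5 * rng.normal(size=n)

# Naive regression of Y on A picks up the backdoor path A <- L <- U -> Y
X_naive = np.column_stack([np.ones(n), A])
naive_coef = np.linalg.lstsq(X_naive, Y, rcond=None)[0][1]

# Adjusting for L blocks the backdoor path (L is a non-collider on it)
X_adj = np.column_stack([np.ones(n), A, L])
adj_coef = np.linalg.lstsq(X_adj, Y, rcond=None)[0][1]

print(f"naive: {naive_coef:.3f}, adjusted: {adj_coef:.3f}")
```

<p>The naive coefficient is far from zero despite the null being true; the coefficient adjusted for \(L\) is approximately zero, even though \(U\) itself was never measured.</p>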
<h3 id="selection-bias-chapter-8">Selection Bias (Chapter 8)</h3>
<p><strong>Note</strong>: While randomization eliminates <em>confounding</em>, it does not eliminate <em>selection bias</em>. All of the issues in this section apply just as much to prospective and/or randomized trials as they do to observational studies.</p>
<table>
<thead>
<tr>
<th style="text-align: center">DAG</th>
<th style="text-align: center">Example</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_1.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Folic Acid supplements <br /><br /> <strong>Y</strong>: Cardiac Malformation <br /><br /> <strong>C</strong>: Death before birth</td>
<td>In this example, we assume folic acid supplements <em>decrease</em> mortality by reducing non-cardiac malformations, and that cardiac malformations <em>increase</em> mortality. <br /><br /> The study restricted participants to fetuses who survived until birth (\(C=0\)). <br /><br /> Two sources of association between treatment and outcome: <br /> 1. Open path \(A \rightarrow Y\), the causal effect. <br /> 2. Open path \(A \rightarrow C \leftarrow Y\) linking \(A\) and \(Y\) due to <strong>conditioning on a common effect (collider)</strong> \(C\). This is the <strong>selection bias</strong>, specifically, <strong>selection bias under the null</strong>. <br /><br /> The selection bias eliminates the ability to make causal inference. If the analysis were not conditioned on \(C\), causal inference would be valid.</td>
<td>I.97</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_2.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Folic Acid supplements <br /><br /> <strong>Y</strong>: Cardiac Malformation <br /><br /> <strong>C</strong>: Death before birth <br /><br /> <strong>S</strong>: Parental Grief</td>
<td>This example is the same as the above, except we consider what happens if the researchers instead condition on an <strong>effect of the collider</strong>, namely \(S\), parental grief. <br /><br /> This is <em>still</em> <strong>selection bias</strong>: conditioning on \(S\), a descendant of the collider \(C\), keeps the path \(A \rightarrow C \leftarrow Y\) open, so association is not causation.</td>
<td>I.98</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_3.png" width="175" /> <br /><br /><br /> <img src="/assets/hernan_dags/8_5.png" width="175" /></td>
<td style="text-align: center">(<em>Note: Missing arrow: \(A \rightarrow Y\)</em> ) <br /><br /> <strong>A</strong>: Antiretroviral treatment <br /><br /> <strong>Y</strong>: 3-year death <br /><br /> <strong>C</strong>: Censoring from study or Missing Data <br /><br /> <strong>U</strong>: High immunosuppression (unmeasured) <br /><br /> <strong>L</strong>: Symptoms, CD4 count, viral load (unmeasured) <br /><br /> ———— <br /><br /> <strong>W</strong>: Lifestyle, personality, educational variables (unmeasured)</td>
<td>Figure 8.3:<br /> In this example, individuals with high immunosuppression – in addition to having a higher risk of death – manifest worse physical symptoms, which mediate censoring from the study. Treatment also causes side effects, which increase censoring as well. <br /><br /> \(C\) is conditioned upon, because only the uncensored actually contribute data to the study. <br /><br /> Per d-separation, \(A \rightarrow C \leftarrow L \leftarrow U \rightarrow Y\) is open due to conditioning on \(C\), allowing association to flow from \(A\) to \(Y\) and killing causal inference. <br /><br /> Note: This is a transformation of figure 8.1, except instead of \(Y\) acting directly on \(C\), we have \(U\) acting on both \(Y\) and \(C\). <br /><br /> Intuition for the bias: if a treated individual with treatment-induced side effects does not drop out (\(C=0\)), this implies that he probably didn’t have high immunosuppression \(U\), and low immunosuppression means better outcomes. Hence, there is probably an inverse association between \(A\) and \(U\) among those who don’t drop out. <br /><br /> This is an example of <strong>selection bias</strong> that arises from <strong>conditioning on a censoring variable that is a common effect of both treatment \(A\) and a cause \(U\) of the outcome \(Y\).</strong> <br /><br /> ———— <br /><br /> Figure 8.5 is the same idea, except it notes that sometimes additional unmeasured variables may contribute to both treatment and censoring.</td>
<td>I.98</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_4.png" width="175" /> <br /><br /><br /> <img src="/assets/hernan_dags/8_6.png" width="175" /></td>
<td style="text-align: center">(<em>Note: Missing arrow: \(A \rightarrow Y\)</em> ) <br /><br /> <strong>A</strong>: Antiretroviral treatment <br /><br /> <strong>Y</strong>: 3-year death <br /><br /> <strong>C</strong>: Censoring from study or Missing Data <br /><br /> <strong>U</strong>: High immunosuppression (unmeasured) <br /><br /> <strong>L</strong>: Symptoms, CD4 count, viral load (unmeasured) <br /><br /> ———— <br /><br /> <strong>W</strong>: Lifestyle, personality, educational variables (unmeasured)</td>
<td>Same example as 8.3/8.5, except we assume that treatment (especially prior treatment) has a direct effect on symptoms \(L\). <br /><br /> Restricting to uncensored individuals still implies conditioning on a <strong>common effect</strong> \(C\) of both \(A\) and \(U\), introducing an association between treatment and outcome. <br /><br /> (<strong>Note</strong>: Unlike in Figure 8.3/8.5, even if we had access to \(L\), stratification is impossible in these DAGs, because while conditioning on \(L\) blocks the path from \(C\) to \(Y\), it also opens the path \(A \rightarrow L \leftarrow U \rightarrow Y\), because \(L\) is a collider on that path. IP weighting, in contrast, could work here. See page I.108 in section 8.5 for a discussion.)</td>
<td>I.98</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_7.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Physical activity <br /><br /> <strong>Y</strong>: Heart Disease <br /><br /> <strong>C</strong>: Becoming a firefighter <br /><br /> <strong>L</strong>: Parental socioeconomic status <br /><br /> <strong>U</strong>: Interest in physical activities (unmeasured)</td>
<td>The goal of this example is to show that while <em>confounding</em> and <em>selection bias</em> are distinct, they can often become functionally the same; this is why some call selection bias “confounding”. <br /><br /> Assume that – unknown to the investigators – \(A\) does not cause \(Y\). Parental SES \(L\) affects becoming a firefighter \(C\) and, through childhood diet, heart disease risk \(Y\). But we assume that \(L\) doesn’t affect \(A\). Attraction to physical activity \(U\) affects being physically active \(A\) and becoming a firefighter \(C\), but not \(Y\). <br /><br /> Per these assumptions, there is <em>no confounding</em>, because there are no common causes of \(A\) and \(Y\). However, restricting the study to firefighters (\(C=0\)) induces a <em>selection bias</em> that can be eliminated by adjusting for \(L\). Thus, some economists would call \(L\) a “confounder” because adjusting for it eliminates the bias.</td>
<td>I.101</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_8.png" width="175" /> <br /><br /><br /> <img src="/assets/hernan_dags/8_9.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Heart Transplant <br /><br /> <strong>Y_1</strong>: Death at time point 1 <br /><br /> <strong>Y_2</strong>: Death at time point 2 <br /><br /> <strong>U</strong>: Protective genetic haplotype (unmeasured)</td>
<td>The purpose of this example is to show the potential for selection bias in <em>time-specific hazard ratios</em>. <br /><br /> The example depicts a <em>randomized experiment</em> representing the effect of heart transplant on risk of death at two time points, for which we assume the true causal DAG is figure 8.8. <br /><br /> In figure 8.8, we assume that \(A\) only directly affects death at the first time point and that \(U\) decreases the risk of death at all times but doesn’t affect treatment. <br /><br /> In this circumstance, the unconditional associational risk ratios are not confounded. In other words, \(aRR_{AY_1} = \frac{\Pr[Y_{1}=1|A=1]}{\Pr[Y_{1}=1|A=0]}\) and \(aRR_{AY_2} = \frac{\Pr[Y_{2}=1|A=1]}{\Pr[Y_{2}=1|A=0]}\) are unbiased and valid for causal inference. <br /><br /> However, trying to compute time-specific hazard ratios is risky. The process is valid at time point 1 (\(aRR_{AY_1}\) is the same as above), but the hazard ratio at time point 2 is inherently conditional on having survived time point 1: \(aRR_{AY_2|Y_{1}=0} = \frac{\Pr[Y_{2}=1|A=1,Y_{1}=0]}{\Pr[Y_{2}=1|A=0,Y_{1}=0]}\). Since \(U\) affects survival at time point 1, this conditioning induces a selection bias that opens a path \(A\rightarrow Y_{1} \leftarrow U \rightarrow Y_{2}\) between \(A\) and \(Y_2\). <br /><br /> If we could condition on \(U\), then \(aRR_{AY_2|Y_{1}=0,U}\) would be valid for causal inference. But we can’t, so conditioning on \(Y_{1}=0\) makes the DAG functionally equivalent to Figure 8.9. This issue is relevant to both observational studies and randomized experiments that follow individuals over time.</td>
<td>I.102</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_3.png" width="175" /></td>
<td style="text-align: center">(<em>Note: Missing arrow: \(A \rightarrow Y\)</em> ) <br /><br /> <strong>A</strong>: Wasabi consumption (randomized) <br /><br /> <strong>Y</strong>: 1-year death <br /><br /> <strong>C</strong>: Censoring <br /><br /> <strong>L</strong>: Heart Disease <br /><br /> <strong>U</strong>: Atherosclerosis (unmeasured)</td>
<td>This example is of an <strong>RCT with censoring</strong>. We imagine that there was in reality an equal number of deaths in treatment and control, but there was higher censoring (\(C=1\)) among patients with heart disease and higher censoring among the wasabi arm. As such, we observe more deaths in the wasabi group than in control. Thus, we see a <strong>selection bias</strong> due to <strong>conditioning on common effect</strong> \(C\). <br /><br /> There are <em>no common causes</em> of \(A\) and \(Y\) – expected in a marginally randomized experiment – so there is no need to adjust for <em>confounding</em> per se. However, there is a common cause \(U\) of both \(C\) and \(Y\), inducing a backdoor path \(C \leftarrow L \leftarrow U \rightarrow Y\). As such, conditioning on non-censored patients \(C=0\) means we have a selection bias that turns \(U\) functionally into a confounder. \(U\) is unmeasured, but the backdoor criterion says that <strong>adjusting for \(L\)</strong> here blocks the backdoor path. <br /><br /> The takeaway here is that <strong>censoring or other selection changes the causal question</strong>, and turns the counterfactual outcome into \(Y^{a=1,c=0}\) – the outcome of receiving the treatment <em>and</em> being uncensored. The relevant causal risk ratio, for example, is thus now \(\frac{E[Y^{a=1,c=0}]}{E[Y^{a=0,c=0}]}\) – “the risk if everyone had been treated and was uncensored” vs “the risk if everyone were untreated and remained uncensored.” In this sense, <strong>censoring is another treatment</strong>.</td>
<td>I.105</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_12.png" width="175" /> <br /><br /><br /> <img src="/assets/hernan_dags/8_13.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Surgery <br /><br /> <strong>Y</strong>: Death <br /><br /> <strong>E</strong>: Genetic haplotype <br /><br /> ———— <br /><br /> <em>Death split by cause</em>: (not recorded) <br /><br /> <strong>Y_A</strong>: Death from tumor <br /><br /> <strong>Y_E</strong>: Death from heart attack <br /><br /> <strong>Y_O</strong>: Death from other causes</td>
<td>In this example, Figure 8.12, surgery \(A\) and haplotype \(E\) are: <br /> (i) marginally independent (i.e. haplotype doesn’t affect the probability of receiving surgery), and <br /> (ii) associated conditionally on \(Y\) (i.e. the probability of receiving surgery <em>does</em> vary by haplotype within at least one stratum of \(Y\)). <br /><br /> The purpose of this example is to show that, despite this fact, situations exist in which \(A\) and \(E\) remain conditionally independent within <em>some</em> strata of \(Y\). <br /><br /> The key idea is to recognize that if you split death into different causes (even if this isn’t recorded), \(A\) and \(E\) affect different sub-causes in different ways (specifically, \(A\) removes the tumor, and \(E\) prevents heart attack). <br /><br /> Arrows from \(Y_{A}\), \(Y_{E}\), and \(Y_{O}\) to \(Y\) are deterministic, and \(Y=0\) if and only if \(Y_{A} = Y_{E}=Y_{O}=0\), so conditioning on \(Y=0\) implicitly conditions the other \(Y\)s to zero. This also blocks the path between \(A\) and \(E\), since it conditions on the non-colliders \(Y_{A}\), \(Y_{E}\), and \(Y_{O}\). <br /><br /> In contrast, conditioning on \(Y=1\) is compatible with any combination of \(Y_{A}\), \(Y_{E}\), and \(Y_{O}\) being equal to 1, so the path between \(A\) and \(E\) is not blocked. <br /><br /> The ability to break the conditional probability of survival down in this way is an example of a <strong>multiplicative survival model</strong>.</td>
<td>I.105</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_14.png" width="175" /> <br /><br /><br /> <img src="/assets/hernan_dags/8_15.png" width="175" /> <br /><br /><br /> <img src="/assets/hernan_dags/8_16.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Surgery <br /><br /> <strong>Y</strong>: Death <br /><br /> <strong>E</strong>: Genetic haplotype <br /><br /> ———— <br /><br /> <em>Death split by cause</em>: (not recorded) <br /><br /> <strong>Y_A</strong>: Death from tumor <br /><br /> <strong>Y_E</strong>: Death from heart attack <br /><br /> <strong>Y_O</strong>: Death from other causes</td>
<td>Same setup as in the examples of Figures 8.12 and 8.13. However, in all of these DAGs, \(A\) and \(E\) affect survival through a common mechanism, either directly or indirectly. In such cases, \(A\) and \(E\) are dependent in <em>both</em> strata of \(Y\). <br /><br /> Taken together with the example above, the point is that <strong>conditioning on a collider <em>always</em> induces an association between its causes, but this association <em>may</em> or <em>may not</em> be restricted to certain levels of the common effect</strong>.</td>
<td>I.105</td>
</tr>
</tbody>
</table>
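<p>The recurring mechanism in this chapter – conditioning on a common effect induces an association between its causes – is easy to demonstrate numerically. This sketch is purely illustrative (continuous variables, an arbitrary selection threshold), not an example from the book:</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# A and Y are marginally independent: no causal effect, no confounding
A = rng.normal(size=n)
Y = rng.normal(size=n)
# C is a common effect (collider) of A and Y, e.g., staying in the study
C = A + Y + 0.5 * rng.normal(size=n)

# Full sample: A and Y are uncorrelated, as expected under the null
r_full = np.corrcoef(A, Y)[0, 1]

# Restricting to one stratum of the collider (here C > 1) opens the
# path A -> C <- Y and induces a spurious negative association
sel = C > 1.0
r_sel = np.corrcoef(A[sel], Y[sel])[0, 1]

print(f"full: {r_full:.3f}, selected: {r_sel:.3f}")
```

<p>The full-sample correlation is near zero, while the correlation within the selected stratum is strongly negative – an association created purely by the selection, with no causal or confounded relationship between \(A\) and \(Y\).</p>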
<p><strong>Some additional (but structurally redundant) examples of selection bias from chapter 8</strong>:</p>
<table>
<thead>
<tr>
<th style="text-align: center">DAG</th>
<th style="text-align: center">Example</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_4.png" width="175" /> <br /><br /><br /> <img src="/assets/hernan_dags/8_6.png" width="175" /></td>
<td style="text-align: center">(<em>Note: Missing arrow: \(A \rightarrow Y\)</em> ) <br /><br /> <strong>A</strong>: Occupational exposure <br /><br /> <strong>Y</strong>: Mortality <br /><br /> <strong>C</strong>: Being at Work <br /><br /> <strong>U</strong>: True health status <br /><br /> <strong>L</strong>: Blood tests and physical exam <br /><br /> ———— <br /><br /> <strong>W</strong>: Exposed jobs are eliminated and workers laid off</td>
<td>(<em>Note</em>: DAGS 8.3/8.5 work just as well, here.) <br /><br /> <strong>Healthy worker bias</strong>: <br /> If we restrict a factory cohort study to those individuals who are actually at work, we miss out on the people that are not working due to either: <br /> (a) disability caused by exposure, or <br /> (b) a common cause of not working and not being exposed.</td>
<td>I.99</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_3.png" width="175" /> <br /><br /><br /> <img src="/assets/hernan_dags/8_5.png" width="175" /></td>
<td style="text-align: center">(<em>Note: Missing arrow: \(A \rightarrow Y\)</em> ) <br /><br /> <strong>A</strong>: Smoking status <br /><br /> <strong>Y</strong>: Coronary heart disease <br /><br /> <strong>C</strong>: Consent to participate <br /><br /> <strong>U</strong>: Family history <br /><br /> <strong>L</strong>: Heart disease awareness <br /><br /> ———— <br /><br /> <strong>W</strong>: Lifestyle</td>
<td>(<em>Note</em>: DAGS 8.4/8.6 work just as well, here.) <br /><br /> <strong>Self-selection bias or Volunteer bias</strong>: <br /> Under any of the above structures, if the study is restricted to people who volunteer or choose to participate, this can induce a selection bias.</td>
<td>I.100</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/8_3.png" width="175" /> <br /><br /><br /> <img src="/assets/hernan_dags/8_5.png" width="175" /></td>
<td style="text-align: center">(<em>Note: Missing arrow: \(A \rightarrow Y\)</em> ) <br /><br /> <strong>A</strong>: Smoking status <br /><br /> <strong>Y</strong>: Coronary heart disease <br /><br /> <strong>C</strong>: Consent to participate <br /><br /> <strong>U</strong>: Family history <br /><br /> <strong>L</strong>: Heart disease awareness <br /><br /> ———— <br /><br /> <strong>W</strong>: Lifestyle</td>
<td>(<em>Note</em>: DAGS 8.4/8.6 work just as well, here.) <br /><br /> <strong>Selection affected by treatment received before study entry</strong>: <br /> Generalization of self-selection bias. Under any of the above structures, if the treatment takes place before the study selection or includes a pre-study component, a selection bias can arise. <br /><br /> Particularly high-risk in studies that look at lifetime exposure to something in middle-aged volunteers. <br /><br /> Similar issues often arise with confounding if confounders are only measured during the study.</td>
<td>I.100</td>
</tr>
</tbody>
</table>
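<p>Several rows above note that IP weighting can correct selection bias from censoring even where stratification cannot (e.g., the notes on Figures 8.3–8.6 and section 8.5). A minimal sketch of that idea, with illustrative probabilities and a null treatment effect; here the censoring probability \(\Pr(C=0|A,L)\) is known by construction, whereas in practice it would be estimated from the data:</p>

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400_000

# Randomized treatment A; the true effect of A on death Y is null
A = rng.binomial(1, 0.5, size=n)
# Unmeasured U causes measured L and raises the risk of death Y
U = rng.binomial(1, 0.5, size=n)
L = U
Y = rng.binomial(1, 0.2 + 0.4 * U)
# Censoring C is more likely under treatment and when L = 1, so C is a
# collider on the path A -> C <- L <- U -> Y
p_censor = 0.1 + 0.3 * A + 0.3 * L
C = rng.binomial(1, p_censor)
unc = C == 0

def arm_mean(a, weights):
    """(Weighted) mean outcome among uncensored individuals with A = a."""
    m = unc & (A == a)
    return np.sum(weights[m] * Y[m]) / np.sum(weights[m])

# Naive contrast among the uncensored is biased away from the null,
# because restricting to C = 0 conditions on the collider
naive = arm_mean(1, np.ones(n)) - arm_mean(0, np.ones(n))

# Weighting each uncensored person by 1 / Pr(C=0 | A, L) reconstructs
# the full (uncensored pseudo-)population and removes the bias
w = 1.0 / (1.0 - p_censor)
ipw = arm_mean(1, w) - arm_mean(0, w)

print(f"naive: {naive:.4f}, ipw: {ipw:.4f}")
```

<p>In this setup the naive risk difference among the uncensored is nonzero despite the null being true, while the IP-weighted contrast returns to approximately zero.</p>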
<h3 id="measurement-bias-chapter-9">Measurement Bias (Chapter 9)</h3>
<table>
<thead>
<tr>
<th style="text-align: center">DAG</th>
<th style="text-align: center">Example</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_1.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: True Treatment <br /><br /> <strong>A*</strong>: Measured treatment <br /><br /> <strong>Y</strong>: True Outcome <br /><br /> <strong>U_A</strong>: Measurement error</td>
<td>This DAG simply demonstrates how the <strong>measured treatment</strong> \(A^{*}\) (aka “measure” or “indicator”) recorded in the study differs from the <em>true</em> treatment (aka “construct”). It also introduces \(U_{A}\), the <strong>measurement error</strong> variable, which encodes all the factors other than \(A\) that determine \(A^{*}\). <br /><br /> <em>Note</em>: \(U_{A}\) and \(A^{*}\) were unnecessary in discussions of <em>confounding</em> or <em>selection bias</em> because they are not part of a backdoor path and nothing is conditioned on them.</td>
<td>I.111</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_2.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: True Treatment <br /><br /> <strong>A*</strong>: Measured treatment <br /><br /> <strong>Y</strong>: True Outcome <br /><br /> <strong>Y*</strong>: Measured outcome <br /><br /> <strong>U_A</strong>: Measurement error for A <br /><br /> <strong>U_Y</strong>: Measurement error for Y</td>
<td>This DAG adds in the notion of imperfect measurement for the outcome as well as the treatment. <br /><br /> Note that there is still no <em>confounding</em> or <em>selection bias</em> at play here, so <strong>measurement bias</strong> or <strong>information bias</strong> is the only thing that would break the link between association and causation. <br /><br /> Figure 9.2 is an example of a DAG with <strong>independent nondifferential error</strong>.</td>
<td>I.112</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_3.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Drug use <br /><br /> <strong>A*</strong>: Recorded history of drug use <br /><br /> <strong>Y</strong>: Liver toxicity <br /><br /> <strong>Y*</strong>: Liver lab values <br /><br /> <strong>U_A</strong>: Measurement error for A <br /><br /> <strong>U_Y</strong>: Measurement error for Y <br /><br /> <strong>U_AY</strong>: Measurement error affecting A and Y (e.g., memory and language gaps during interview)</td>
<td>In Figure 9.2 above, \(U_{A}\) and \(U_{Y}\) are independent according to d-separation, because the path between them is blocked by colliders. Independent errors could include EHR data entry errors that occur by chance, technical errors at a lab, etc. <br /><br /> In this figure, we add \(U_{AY}\) to capture the existence of <em>dependent</em> errors. For example, communication errors that take place during an interview with a patient could affect both the recorded drug use and the recorded lab values. <br /><br /> Figure 9.3 is an example of <strong>dependent nondifferential error</strong>.</td>
<td>I.112</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_4.png" width="175" /> <br /><br /><br /> <img src="/assets/hernan_dags/9_6.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Drug use <br /><br /> <strong>A*</strong>: History of drug use per patient interview <br /><br /> <strong>Y</strong>: Dementia <br /><br /> <strong>Y*</strong>: Dementia diagnosis <br /><br /> <strong>U_A</strong>: Measurement error for A <br /><br /> <strong>U_Y</strong>: Measurement error for Y <br /><br /> ———— <br /><br /> <strong>U_AY</strong>: Measurement error affecting A and Y</td>
<td><strong>Recall bias</strong> is one example of how the <em>true outcome</em> can bias the <em>treatment measurement error</em>. <br /><br /> In this example, patients with dementia are less able to communicate effectively, so true cases of the disease are more likely to have faulty medical histories. <br /><br /> Another example of <em>recall bias</em> could arise in a study of the effect of alcohol use during pregnancy \(A\) on birth defects \(Y\), if alcohol intake is measured by recall after delivery. Bad medical outcomes, especially ones like complicated births, often affect patient recall and patient reporting. <br /><br /> Figure 9.4 is an example of <strong>independent differential measurement error</strong>. <br /><br /> Adding dependent errors such as a faulty interview makes Figure 9.6 an example of <strong>dependent differential error</strong>.</td>
<td>I.113</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_5.png" width="175" /> <br /><br /><br /> <img src="/assets/hernan_dags/9_7.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Drug use <br /><br /> <strong>A*</strong>: Recorded history of drug use <br /><br /> <strong>Y</strong>: Liver toxicity <br /><br /> <strong>Y*</strong>: Liver lab values <br /><br /> <strong>U_A</strong>: Measurement error for A <br /><br /> <strong>U_Y</strong>: Measurement error for Y <br /><br /> ———— <br /><br /> <strong>U_AY</strong>: Measurement error affecting A and Y</td>
<td>An example of the <em>true treatment</em> affecting the <em>measurement error of the outcome</em> could also arise in the setting of drug use and liver toxicity. <br /><br /> For example, if a doctor finds out a patient has a drug problem, he may start monitoring the patient’s liver more frequently, and become more likely to catch aberrant liver lab values and record them in the EHR. <br /><br /> Figure 9.5 is an example of <strong>independent differential measurement error</strong>. <br /><br /> Adding dependent errors such as a faulty interview makes Figure 9.7 an example of <strong>dependent differential error</strong>.</td>
<td>I.113</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_8.png" width="175" /> <br /><br /> <img src="/assets/hernan_dags/7_5.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Drug use <br /><br /> <strong>Y</strong>: Liver toxicity <br /><br /> <strong>L</strong>: History of hepatitis <br /><br /> <strong>L*</strong>: Measured history of hepatitis</td>
<td>This example demonstrates <strong>mismeasured confounders</strong>. <br /><br /> Controlling for \(L\) in Figure 9.8 would be sufficient to allow for causal inference. However, if \(L\) is imperfectly measured – say, because it was retrospectively recorded from memory – then the standardized or IP-weighted risk ratio based on \(L^{*}\) will generally differ from the true causal risk ratio. <br /><br /> A cool observation is that since a mismeasured confounder can be thought of as unmeasured confounding, Figure 9.8 is actually equivalent to Figure 7.5: \(L^{*}\) is essentially a <em>surrogate confounder</em> (like Figure 7.5’s \(L\)) for an unmeasured actual confounder (Figure 7.5’s \(U\) playing the role of Figure 9.8’s \(L\)). Hence, controlling for \(L^{*}\) will be better than nothing but still flawed.</td>
<td>I.114</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_9.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Aspirin <br /><br /> <strong>Y</strong>: Stroke <br /><br /> <strong>L</strong>: Heart Disease <br /><br /> <strong>U</strong>: Atherosclerosis (unmeasured) <br /><br /> <strong>L*</strong>: Measured history of heart disease</td>
<td>Figure 9.9 is the same idea as Figure 9.8: Even though controlling for \(L\) <em>would</em> be sufficient, a mismeasured \(L^{*}\) is insufficient to block the backdoor path in general. <br /><br /> Another note here is that <strong>mismeasurement of confounders can result in apparent effect modification</strong>. For example, if all participants who reported a history of heart disease (\(L^{*}=1\)) and half the participants who reported no such history (\(L^{*}=0\)) actually had heart disease, then stratifying on (\(L^{*}=1\)) would eliminate all confounding in that stratum, but stratifying on (\(L^{*}=0\)) would fail to do so. Thus one could detect a spurious association in (\(L^{*}=0\)) but not in (\(L^{*}=1\)) and falsely conclude that \(L^{*}\) is an effect modifier. (See discussion on I.115.)</td>
<td>I.114</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_10.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Folic Acid supplements <br /><br /> <strong>Y</strong>: Cardiac Malformation <br /><br /> <strong>C</strong>: Death before birth <br /><br /> <strong>C*</strong>: Death records</td>
<td><strong>Conditioning on a mismeasured collider induces a selection bias</strong>, because \(C^{*}\) is a common effect of treatment \(A\) and outcome \(Y\).</td>
<td>I.115</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_11.png" width="175" /> <br /><br /> <img src="/assets/hernan_dags/9_12.png" width="175" /></td>
<td style="text-align: center"><strong>Z</strong>: Assigned treatment <br /><br /> <strong>A</strong>: Heart Transplant <br /><br /> <strong>Y</strong>: 5-year Mortality <br /><br /> (Ignore <strong>U</strong> here)</td>
<td>Figure 9.11 is an example of an <strong>intention-to-treat</strong> RCT. <br /><br /> ITT RCTs can almost be thought of as an RCT with a potentially <em>misclassified treatment</em>. However, unlike a misclassified treatment, the treatment assignment \(Z\) has a causal effect on the outcome \(Y\), both <br /> (a) by influencing the actual treatment \(A\), and <br />(b) by influencing study participants, who know what \(Z\) is and change their behavior accordingly.<br /> Hence, the causal effect of \(Z\) on \(Y\) depends on the strength of the arrow \(Z \rightarrow Y\), the arrow \(Z \rightarrow A\), and the arrow \(A \rightarrow Y\). <br /><br /> Double-blinding attempts to remove \(Z \rightarrow Y\) (Figure 9.12).</td>
<td>I.115</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_11.png" width="175" /></td>
<td style="text-align: center"><strong>Z</strong>: Assigned treatment <br /><br /> <strong>A</strong>: Heart Transplant <br /><br /> <strong>Y</strong>: 5-year Mortality <br /><br /> <strong>U</strong>: Illness Severity (unmeasured)</td>
<td>By including \(U\), we are considering the fact that in an ITT study, severe illness (or other variables) can lead some patients to seek out different treatment than they’ve been assigned. <br /><br /> Note that there is a backdoor path \(A \leftarrow U \rightarrow Y\) and thus <strong>confounding for the effect of \(A\) on \(Y\)</strong>, requiring adjustment. <br /><br /> However, there is <strong>no confounding of \(Z\) and \(Y\)</strong>, and thus no need for adjustment. <br /><br /> This explains why the <strong>intention-to-treat effect</strong> is often estimated in lieu of the <strong>per-protocol effect</strong>. <br /><br /> Taken together, the <strong><em>per-protocol effect</em> brings with it unmeasured confounding</strong>, and <strong><em>ITT</em> brings risk of misclassification bias</strong>. So one needs to trade these off when deciding which to use. (Full discussion below and on I.120)</td>
<td>I.115</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_13.png" width="175" /></td>
<td style="text-align: center"><strong>Z</strong>: Assigned treatment <br /><br /> <strong>A</strong>: Heart Transplant <br /><br /> <strong>Y</strong>: 5-year Mortality <br /><br /> <strong>U</strong>: Illness Severity (unmeasured) <br /><br /> <strong>L</strong>: Measured factors that mediate U</td>
<td>This example is of an <strong>as-treated analysis</strong>, a type of per-protocol analysis. <br /><br /> <em>As-treated</em> includes <em>all patients</em> and compares those treated (\(A=1\)) vs not treated (\(A=0\)), independent of their assignment \(Z\). <br /><br /> As-treated analyses are <em>confounded</em> by \(U\), and thus depend entirely on whether they can accurately adjust for measurable factors \(L\) to block the backdoor paths between \(A\) and \(Y\).</td>
<td>I.118</td>
</tr>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_14.png" width="175" /></td>
<td style="text-align: center"><strong>Z</strong>: Assigned treatment <br /><br /> <strong>A</strong>: Heart Transplant <br /><br /> <strong>Y</strong>: 5-year Mortality <br /><br /> <strong>U</strong>: Illness Severity (unmeasured) <br /><br /> <strong>L</strong>: Measured factors that mediate U <br /><br /> <strong>S</strong>: Selection filter (A=Z)</td>
<td>This example is of a <strong>conventional per-protocol analysis</strong>, a second method to measure per-protocol effect. <br /><br /> Conventional per-protocol analyses limit the population to those who adhered to the study protocol, subsetting to those for whom \(A=Z\). <br /><br /> This method induces a <em>selection bias</em> on \(A=Z\), and thus still requires adjustment on \(L\).</td>
<td>I.118</td>
</tr>
</tbody>
</table>
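<p>The mismeasured-confounder point above is easy to check numerically. Below is a minimal Monte Carlo sketch in plain Python (parameters are my own and purely illustrative, not from the book): treatment \(A\) has <em>no</em> effect on outcome \(Y\), \(L\) confounds both, and \(L^{*}\) is \(L\) with 20% misclassification. Stratifying on the true \(L\) recovers a null risk difference, while stratifying on \(L^{*}\) leaves a spurious association.</p>

```python
import random

def simulate(n=200_000, flip=0.2, seed=0):
    """Null effect of A on Y; L confounds A and Y; L* is L with misclassification."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        L = rng.random() < 0.5
        A = rng.random() < (0.8 if L else 0.2)          # treatment driven by true L
        Y = rng.random() < (0.6 if L else 0.2)          # outcome driven by true L only
        Lstar = (not L) if rng.random() < flip else L   # mismeasured confounder
        rows.append((int(L), int(A), int(Y), int(Lstar)))
    return rows

def risk_diff(rows, stratum_idx, stratum_val):
    """Pr[Y=1 | A=1, S=s] - Pr[Y=1 | A=0, S=s] within one stratum of S."""
    def risk(a):
        sub = [r for r in rows if r[1] == a and r[stratum_idx] == stratum_val]
        return sum(r[2] for r in sub) / len(sub)
    return risk(1) - risk(0)

rows = simulate()
rd_true_L = risk_diff(rows, 0, 1)   # stratify on the true confounder L
rd_L_star = risk_diff(rows, 3, 1)   # stratify on the mismeasured L*
```

With these parameters, <code>rd_true_L</code> is approximately zero while <code>rd_L_star</code> is clearly positive, illustrating why conditioning on a mismeasured \(L^{*}\) fails to block the backdoor path.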
<p><strong>Some additional (but structurally redundant) examples of measurement bias from chapter 9:</strong></p>
<table>
<thead>
<tr>
<th style="text-align: center">DAG</th>
<th style="text-align: center">Example</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><img src="/assets/hernan_dags/9_4.png" width="175" /></td>
<td style="text-align: center"><strong>A</strong>: Drug use <br /><br /> <strong>A*</strong>: Recorded history of drug use <br /><br /> <strong>Y</strong>: Liver toxicity <br /><br /> <strong>Y*</strong>: Liver lab values <br /><br /> <strong>U_A</strong>: Measurement error for A <br /><br /> <strong>U_Y</strong>: Measurement error for Y</td>
<td><strong>Reverse causation bias</strong> is another example of how the <em>true outcome</em> can bias <em>treatment measurement error</em>. <br /><br /> In this example, liver toxicity worsens clearance of drugs from the body, which could affect blood levels of the drugs.</td>
<td>I.112</td>
</tr>
</tbody>
</table>
Wed, 19 Jun 2019 07:50:00 -0400
http://sgfin.github.io/2019/06/19/Causal-Inference-Book-All-DAGs/
Causal Inference Book Part I -- Glossary and Notes<p>This page contains some notes from Miguel Hernan and Jamie Robins’s <a href="https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/">Causal Inference Book</a>. So far, I’ve only done Part I.</p>
<p>This page only has key terms and concepts. On <a href="https://sgfin.github.io/2019/06/19/Causal-Inference-Book-All-DAGs/">this page</a>, I’ve tried to systematically present all the DAGs in the same book. I imagine that one will be more useful going forward, at least for me.</p>
<p><strong>Table of Contents</strong>:</p>
<ul id="markdown-toc">
<li><a href="#a-few-common-variables" id="markdown-toc-a-few-common-variables">A few common variables</a></li>
<li><a href="#chapter-1-definition-of-causal-effect" id="markdown-toc-chapter-1-definition-of-causal-effect">Chapter 1: Definition of Causal Effect</a></li>
<li><a href="#chapter-2-randomized-experiments" id="markdown-toc-chapter-2-randomized-experiments">Chapter 2: Randomized Experiments</a></li>
<li><a href="#chapter-3-observational-studies" id="markdown-toc-chapter-3-observational-studies">Chapter 3: Observational Studies</a></li>
<li><a href="#chapter-4-effect-modification" id="markdown-toc-chapter-4-effect-modification">Chapter 4: Effect Modification</a></li>
<li><a href="#chapter-5-interaction" id="markdown-toc-chapter-5-interaction">Chapter 5: Interaction</a></li>
<li><a href="#chapter-6--causal-diagrams" id="markdown-toc-chapter-6--causal-diagrams">Chapter 6: Causal Diagrams</a></li>
<li><a href="#chapter-7--confounding" id="markdown-toc-chapter-7--confounding">Chapter 7: Confounding</a></li>
<li><a href="#chapter-8--selection-bias" id="markdown-toc-chapter-8--selection-bias">Chapter 8: Selection Bias</a></li>
<li><a href="#chapter-9--measurement-bias" id="markdown-toc-chapter-9--measurement-bias">Chapter 9: Measurement Bias</a></li>
<li><a href="#chapter-10--random-variability" id="markdown-toc-chapter-10--random-variability">Chapter 10: Random Variability</a></li>
<li><a href="#chapter-11-why-model" id="markdown-toc-chapter-11-why-model">Chapter 11: Why Model?</a></li>
<li><a href="#chapter-12" id="markdown-toc-chapter-12">Chapter 12:</a></li>
</ul>
<h3 id="a-few-common-variables">A few common variables</h3>
<table>
<thead>
<tr>
<th style="text-align: center">Variable</th>
<th style="text-align: center">Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><em>A</em>, <em>E</em></td>
<td style="text-align: center">Treatment</td>
</tr>
<tr>
<td style="text-align: center"><em>Y</em></td>
<td style="text-align: center">Outcome</td>
</tr>
<tr>
<td style="text-align: center"><em>Y^(A=a)</em></td>
<td style="text-align: center">Counterfactual outcome under treatment with \(a\)</td>
</tr>
<tr>
<td style="text-align: center"><em>Y^(a,e)</em></td>
<td style="text-align: center">Joint counterfactual outcome under treatment with \(a\) and \(e\)</td>
</tr>
<tr>
<td style="text-align: center"><em>L</em></td>
<td style="text-align: center">Patient variable (often confounder)</td>
</tr>
<tr>
<td style="text-align: center"><em>U</em></td>
<td style="text-align: center">Patient variable (often unmeasured or background variable)</td>
</tr>
<tr>
<td style="text-align: center"><em>M</em></td>
<td style="text-align: center">Patient variable (often effect modifier)</td>
</tr>
</tbody>
</table>
<h3 id="chapter-1-definition-of-causal-effect">Chapter 1: Definition of Causal Effect</h3>
<table>
<thead>
<tr>
<th style="text-align: center">Term</th>
<th style="text-align: center">Notation or Formula</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><strong>Association</strong></td>
<td style="text-align: center">Pr[Y=1|A=1] \(\neq\) Pr[Y=1|A=0]</td>
<td><em>Example definitions of independence (lack of association)</em>: <br /> Y \(\unicode{x2AEB}\) A <br /> or <br /> Pr[Y=1|A=1] - Pr[Y=1|A=0] = 0 <br /> or <br /> \(\frac{Pr[Y=1|A=1]}{Pr[Y=1|A=0]}\) = 1 <br /> or <br /> \(\frac{Pr[Y=1|A=1]/Pr[Y=0|A=1]}{Pr[Y=1|A=0]/Pr[Y=0|A=0]}\) = 1</td>
<td>I.10</td>
</tr>
<tr>
<td style="text-align: center"><strong>Causation and Causal Effects</strong></td>
<td style="text-align: center"><em>Causation</em>:<br />Pr[Y^(a=1)=1] \(\neq\) Pr[Y^(a=0)=1] <br /><br /> <em>Individual Causal Effects</em>:<br /> Y^(a=1) - Y^(a=0) <br /><br /> <em>Population Average Causal Effects</em>:<br /> E[Y^(a=1)] - E[Y^(a=0)] <br /><br /> <em>where</em> <br />Y^(a=1) = Outcome for treatment w/ \(a=1\) <br /> Y^(a=0) = Outcome for treatment w/ \(a=0\)</td>
<td><em>Sharp causal null hypothesis</em>: <br />Y^(a=1) = Y^(a=0) for all individuals in the population. <br /><br /> <em>Null hypothesis of no average causal effect</em>: <br /> E[Y^(a=1)] = E[Y^(a=0)] <br /><br /> <em>Mathematical representations of causal null</em>: <br /> Pr[Y^(a=1)=1] - Pr[Y^(a=0)=1] = 0 <br /> or <br /> \(\frac{Pr[Y^{a=1}=1]}{Pr[Y^{a=0}=1]} = 1\) <br /> or <br /> \(\frac{Pr[Y^{a=1}=1]/Pr[Y^{a=1}=0]}{Pr[Y^{a=0}=1]/Pr[Y^{a=0}=0]} = 1\)</td>
<td>I.7</td>
</tr>
</tbody>
</table>
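<p>The association measures in this table are straightforward to compute from a 2×2 table of treatment-by-outcome counts. A small sketch (the function name and counts are my own, purely illustrative):</p>

```python
def association_measures(n11, n10, n01, n00):
    """Association measures from a 2x2 table.
    n_ay = count with A=a, Y=y: n11 treated cases, n10 treated non-cases,
    n01 untreated cases, n00 untreated non-cases."""
    p1 = n11 / (n11 + n10)                           # Pr[Y=1 | A=1]
    p0 = n01 / (n01 + n00)                           # Pr[Y=1 | A=0]
    risk_difference = p1 - p0
    risk_ratio = p1 / p0
    odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
    return risk_difference, risk_ratio, odds_ratio
```

Under independence all three measures land at their null values (0, 1, and 1); for example, <code>association_measures(30, 70, 15, 35)</code> gives a risk of 0.3 in both arms.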
<h3 id="chapter-2-randomized-experiments">Chapter 2: Randomized Experiments</h3>
<table>
<thead>
<tr>
<th style="text-align: center">Term</th>
<th style="text-align: center">Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><strong>Marginally randomized experiment</strong></td>
<td style="text-align: center">Single unconditional (marginal) randomization probability applied to assign treatments to all individuals in experiment. <br /><br /> Produces exchangeability of treated and untreated. <br /><br /> Values of counterfactual outcomes are <strong>missing completely at random (MCAR)</strong>.</td>
<td>I.18</td>
</tr>
<tr>
<td style="text-align: center"><strong>Conditionally randomized experiment</strong></td>
<td style="text-align: center">Randomized trial where study population is stratified by some variable \(L\), with different treatment probabilities for each stratum. <br /><br /> Needn’t produce <em>marginal exchangeability</em>, but produces <em>conditional exchangeability</em>. <br /><br /> Values of counterfactuals are <em>not</em> MCAR, but are <strong>missing at random (MAR)</strong> conditional on \(L\).</td>
<td>I.18</td>
</tr>
<tr>
<td style="text-align: center"><strong>Standardization</strong></td>
<td style="text-align: center">Calculate the <em>marginal</em> counterfactual risk from a <em>conditionally randomized experiment</em> by taking a weighted average over the stratum-specific risks. <br /><br /> Standardized mean: <br /><br /> \(\sum_l E[Y|L=l,A=a] \times Pr[L=l]\) <br /><br /> Causal risk ratio can be computed via standardization as follows: <br /><br /> \(\frac{Pr[Y^{a=1}=1]}{Pr[Y^{a=0}=1]} = \frac{\sum_l Pr[Y=1|L=l,A=1]\times Pr[L=l]}{\sum_l Pr[Y=1|L=l,A=0]\times Pr[L=l]}\)</td>
<td>I.19</td>
</tr>
<tr>
<td style="text-align: center"><strong>Inverse probability weighting</strong></td>
<td style="text-align: center">Given a conditionally randomized study population: <br /><img src="/assets/hernan_dags/2_1.png" width="300" /> <br /> We can invoke an assumption of conditional exchangeability given \(L\) to simulate the counterfactual in which everyone had received (or not received) the treatment: <br /> <img src="/assets/hernan_dags/2_2.png" width="300" /> <br /> The causal risk ratio can then be directly calculated by comparing <br /> \(Pr[Y^{a=1}=1]/Pr[Y^{a=0}=1]\) (in this example, it’s \(\frac{10/20}{10/20}=1\).) <br /><br /> By the same token, you can effectively double your population and create a hypothetical <em>pseudo-population</em> in which everyone had received both treatments: <br /> <img src="/assets/hernan_dags/2_3.png" width="300" /> <br /><br /> This process amounts to weighting each individual in the population by the inverse of the conditional probability of receiving the treatment she received (see formula on right above). Hence the name <em>inverse probability (IP) weighting</em>.</td>
<td>I.20</td>
</tr>
</tbody>
</table>
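<p>Standardization and IP weighting can be checked against each other on a toy dataset. The sketch below uses a hypothetical 20-person conditionally randomized study patterned on the book’s heart-transplant example (the counts are illustrative); under conditional exchangeability given \(L\), both estimators recover the same marginal counterfactual risk.</p>

```python
# counts[(l, a)] = (n individuals, n with Y=1); hypothetical 20-person study:
# stratum L=0 (8 people, half treated), stratum L=1 (12 people, 3/4 treated).
counts = {
    (0, 0): (4, 1), (0, 1): (4, 1),   # L=0: risk 0.25 under either arm
    (1, 0): (3, 2), (1, 1): (9, 6),   # L=1: risk 2/3 under either arm
}

def standardized_risk(counts, a):
    """sum_l Pr[Y=1 | L=l, A=a] * Pr[L=l]"""
    N = sum(n for n, _ in counts.values())
    total = 0.0
    for l in (0, 1):
        n_l = sum(counts[(l, aa)][0] for aa in (0, 1))  # stratum size
        n, y = counts[(l, a)]
        total += (y / n) * (n_l / N)
    return total

def ip_weighted_risk(counts, a):
    """(1/N) * sum_i I(A_i=a) * Y_i / Pr[A=a | L_i]"""
    N = sum(n for n, _ in counts.values())
    total = 0.0
    for l in (0, 1):
        n_l = sum(counts[(l, aa)][0] for aa in (0, 1))
        n, y = counts[(l, a)]
        p_a_given_l = n / n_l                 # Pr[A=a | L=l] in the data
        total += y / p_a_given_l              # upweight outcomes by 1/Pr[A|L]
    return total / N
```

Here both estimators return a marginal risk of 0.5 in each treatment arm, so the causal risk ratio is 1.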
<h3 id="chapter-3-observational-studies">Chapter 3: Observational Studies</h3>
<table>
<thead>
<tr>
<th style="text-align: center">Term</th>
<th style="text-align: center">Notation or Formula</th>
<th>English Definition</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><strong>Identifiability conditions</strong></td>
<td style="text-align: center">See below.</td>
<td>Sufficient conditions for conceptualizing an observational study as a randomized experiment. <br /><br /> Consist of: <br /> 1. Consistency <br /> 2. Exchangeability, and <br /> 3. Positivity.</td>
<td> </td>
<td>I.25</td>
</tr>
<tr>
<td style="text-align: center"><strong>Consistency</strong></td>
<td style="text-align: center">If \(A_i = a\), then \(Y_{i}^{a} = Y_{i}^{A_i} = Y_i\)</td>
<td>“The values of treatment under comparison correspond to well-defined interventions that, in turn, correspond to the versions of treatment in the data.” <br /><br /> Has two main components: <br /> 1. Precise specification of counterfactual outcomes Y^a, and <br /> 2. Linkage of counterfactual outcomes to observed outcomes.</td>
<td>Violated when the intervention is ill-defined. <br /><br /> Examples: <br /> - Study looks at “heart transplant” but doesn’t look at protocols (e.g. which immunosuppressant is used). If the effect varies between versions of treatment and protocols are not equally distributed, this could cause problems. <br /> - Study wants to look at “obesity”, but “non-obesity” lumps together non-obesity from exercise vs cachexia vs genes vs diet. Need to subset population or make the assumption that the specific source of non-obesity doesn’t impact the outcome. (This is called the <em>treatment-variation irrelevance</em> assumption.) <br /><br /> Not a testable assumption; relies on domain expertise.</td>
<td>I.31</td>
</tr>
<tr>
<td style="text-align: center"><strong>Exchangeability</strong> (aka exogeneity)</td>
<td style="text-align: center">Y^a \(\unicode{x2AEB}\) A for all \(a\) <br /><br /> or <br /><br /> Pr[Y^a=1 | A=1] = Pr[Y^a=1 | A=0] = Pr[Y^a=1]</td>
<td>“The treated, had they remained untreated, would have experienced the same average outcome as the untreated did, and vice versa.” <br /><br /> Essentially, this is the assumption of no unmeasured confounding.</td>
<td>Beware formula: Not the same as Y \(\unicode{x2AEB}\) A, which would mean treatment has no effect on outcome.</td>
<td>I.27</td>
</tr>
<tr>
<td style="text-align: center"><strong>Conditional exchangeability</strong></td>
<td style="text-align: center">Y^a \(\unicode{x2AEB}\) A | L for all a <br /><br /> or <br /><br /> Pr[Y^a=1 | A=1, L=1] = Pr[Y^a=1 | A=0, L=1] = Pr[Y^a=1 | L=1]</td>
<td>“The conditional probability of receiving every value of treatment is randomized or depends only on measured covariates”</td>
<td>Think of a conditional RCT where assignment depends only on \(L\). <br /><br /> In observational studies, this is an untestable assumption, and thus relies on domain expertise.</td>
<td>I.27</td>
</tr>
<tr>
<td style="text-align: center"><strong>Positivity</strong></td>
<td style="text-align: center">Pr[A=a | L=\(l\) ] > 0 for all values \(l\) with Pr[L=\(l\)] \(\neq\) 0 in the population of interest</td>
<td>“The conditional probability of receiving every value of treatment is greater than zero, i.e. positive.”</td>
<td>Aka “Experimental treatment assumption” <br /><br /> Example of positivity not holding: doctors always give heart transplants to patients in critical condition, eliminating positivity from that stratum of an observational study. <br /><br /> Unlike exchangeability, positivity <em>can</em> be empirically verified.</td>
<td>I.30</td>
</tr>
</tbody>
</table>
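<p>Since positivity, unlike exchangeability, can be verified empirically, a simple check is to scan the data for strata of \(L\) in which some treatment value never occurs. A minimal sketch (the function name is my own):</p>

```python
from collections import defaultdict

def positivity_violations(records, treatments=(0, 1)):
    """records: iterable of (l, a) pairs. Returns the strata l that appear in
    the data but in which at least one treatment value is never observed,
    i.e. strata where the estimated Pr[A=a | L=l] is zero for some a."""
    seen = defaultdict(set)   # l -> treatment values observed in stratum l
    for l, a in records:
        seen[l].add(a)
    return sorted(l for l, vals in seen.items() if set(treatments) - vals)
```

For example, if every patient in critical condition (\(l=1\)) is transplanted, stratum 1 is flagged as a violation.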
<h3 id="chapter-4-effect-modification">Chapter 4: Effect Modification</h3>
<table>
<thead>
<tr>
<th style="text-align: center">Term</th>
<th style="text-align: center">Notation or Formula</th>
<th>English Definition</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><strong>Effect modification</strong> <br /> aka effect-measure modification</td>
<td style="text-align: center"><em>Additive effect modification</em>: <br /> E[Y^(a=1)-Y^(a=0) | M = 1] \(\neq\) E[Y^(a=1)-Y^(a=0) | M = 0] <br /><br /> <em>Multiplicative effect modification</em>: <br /> \(\frac{E[Y^{a=1} | M = 1]}{E[Y^{a=0} | M = 1]}\) \(\neq\) \(\frac{E[Y^{a=1}| M = 0]}{E[Y^{a=0}| M = 0]}\)</td>
<td>\(M\) is a modifier of the effect of \(A\) on \(Y\) when the average causal effect of \(A\) on \(Y\) varies across levels of \(M\).</td>
<td>The <em>null hypothesis of no average causal effect</em> does <em>not</em> necessarily imply the absence of effect modification (e.g. equal and opposite effects in men and women could cancel at the population level), but the <em>sharp null hypothesis of no causal effect</em> does imply no effect modification. <br /><br /> We only count variables <em>unaffected by treatment</em> as effect modifiers. Similar variables that are affected by treatment are termed <strong>mediators</strong>.</td>
<td>I.41</td>
</tr>
<tr>
<td style="text-align: center"><strong>Qualitative effect modification</strong></td>
<td style="text-align: center"> </td>
<td>Average causal effects in different subsets of the population go in opposite directions.</td>
<td>In the presence of qualitative effect modification, additive effect modification implies multiplicative effect modification, and vice versa. In the absence of qualitative effect modification, it’s possible to have only additive or only multiplicative effect modification. <br /><br /> Effect modifiers are not necessarily assumed to play a causal role. To make this explicit, sometimes the terms <em>surrogate effect modifier</em> vs <em>causal effect modifier</em> are used, or you can play it even safer and refer to “effect heterogeneity across strata of \(M\).” <br /><br /> Effect modification is helpful, among other things, for (i) assessing transportability to new populations where \(M\) may have different prevalences, (ii) choosing subpopulations that may most benefit from treatment, and (iii) identifying mechanisms leading to outcome if modifiers are mechanistically meaningful (e.g. circumcision for HIV transmission).</td>
<td>I.42</td>
</tr>
<tr>
<td style="text-align: center"><strong>Stratification</strong></td>
<td style="text-align: center"><em>Stratified causal risk differences</em>: <br /> E[Y^(a=1) | M = 1] - <br /> E[Y^(a=0) | M = 1] <br />and<br /> E[Y^(a=1) | M = 0] - <br /> E[Y^(a=0) | M = 0]</td>
<td>To <em>identify</em> effect modification by variable \(M\), separately compute the causal effect of \(A\) on \(Y\) for each statum of the variable \(M\).</td>
<td>If the study design assumes conditional rather than marginal exchangeability, an analysis to estimate effect modification must account for all other variables \(L\) required to give exchangeability. This could involve standardization (IP weighting, etc.) by \(L\) within each stratum \(M\), or just using finer-grained stratification over all pairwise combinations of \(M\) and \(L\) (see page I.49). <br /><br /> By the same token, stratification can be an alternative to adjustment techniques such as standardization or IP weighting in the analysis of any conditionally randomized experiment: instead of estimating an average causal effect over the population while standardizing for \(L\), just stratify on \(L\) and report separate causal effect estimates for each stratum.</td>
<td>I.43-49</td>
</tr>
<tr>
<td style="text-align: center"><strong>Collapsibility</strong></td>
<td style="text-align: center"> </td>
<td>A characteristic of a population <em>effect measure</em>. Means that the effect measure can be expressed as a weighted average of stratum-specific measures.</td>
<td>Examples of collapsible effect measures: risk ratio and risk difference <br /><br /> Example of non-collapsible effect measure: odds ratio. <br /><br /> Noncollapsibility can produce counter-intuitive findings like a causal odds ratio that’s smaller in the average population than in any stratum of the population.</td>
<td>I.53</td>
</tr>
<tr>
<td style="text-align: center"><strong>Matching</strong></td>
<td style="text-align: center"> </td>
<td>Construct a subset of the population in which all variables \(L\) have the same distribution in both the treated and the untreated.</td>
<td>Under assumption of conditional exchangeability given \(L\) in the source population, a matched population will have unconditional exchangeability. <br /><br /> Usually, constructed by including all of the smaller group (e.g. the treated) and selecting one member of the larger group (e.g. the untreated) with matching \(L\) for each member in the smaller group. Often requires approximate matching.</td>
<td>I.49</td>
</tr>
<tr>
<td style="text-align: center"><strong>Interference</strong></td>
<td style="text-align: center"> </td>
<td>Treatment of one individual affects the treatment status of other individuals in the population.</td>
<td>Example: A socially active individual convinces friends to join him while exercising.</td>
<td>I.48</td>
</tr>
<tr>
<td style="text-align: center"><strong>Transportability</strong></td>
<td style="text-align: center"> </td>
<td>Ability to use causal effect estimation from one population in order to inform decisions in another (“target”) population. <br /><br /></td>
<td>Requires that the target population is characterized by comparable patterns of: <br /> - Effect modification <br /> - Interference, and <br /> - Versions of treatment</td>
<td>I.48</td>
</tr>
</tbody>
</table>
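<p>Identifying effect modification by stratification is mechanical: compute the risk difference within each stratum of \(M\). The sketch below assumes a <em>marginally randomized</em> trial (so association equals causation within strata, with no further adjustment for \(L\) needed); the counts are my own and are chosen to exhibit qualitative effect modification.</p>

```python
from collections import defaultdict

def stratum_risk_diffs(records):
    """records: iterable of (m, a, y). Returns
    {m: Pr[Y=1|A=1,M=m] - Pr[Y=1|A=0,M=m]}, which under marginal
    randomization equals the stratum-specific causal risk difference."""
    tally = defaultdict(lambda: [0, 0])   # (m, a) -> [n individuals, n with Y=1]
    for m, a, y in records:
        tally[(m, a)][0] += 1
        tally[(m, a)][1] += y
    diffs = {}
    for m in {m for m, _ in tally}:
        n1, y1 = tally[(m, 1)]
        n0, y0 = tally[(m, 0)]
        diffs[m] = y1 / n1 - y0 / n0
    return diffs

# Hypothetical trial: treatment helps stratum M=0 and harms stratum M=1.
records = ([(0, 1, 1)] * 8 + [(0, 1, 0)] * 2 + [(0, 0, 1)] * 2 + [(0, 0, 0)] * 8 +
           [(1, 1, 1)] * 2 + [(1, 1, 0)] * 8 + [(1, 0, 1)] * 8 + [(1, 0, 0)] * 2)
diffs = stratum_risk_diffs(records)
```

Here <code>diffs[0]</code> is +0.6 and <code>diffs[1]</code> is -0.6: effects in opposite directions (qualitative effect modification) that cancel exactly at the population level, illustrating why the null of no <em>average</em> causal effect does not rule out effect modification.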
<h3 id="chapter-5-interaction">Chapter 5: Interaction</h3>
<table>
<thead>
<tr>
<th style="text-align: center">Term</th>
<th style="text-align: center">Notation or Formula</th>
<th>English Definition</th>
<th>Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><strong>Joint counterfactual</strong></td>
<td style="text-align: center">Y^(a,e)</td>
<td>Counterfactual outcome that would have been observed if we had intervened to set the individual’s values of \(A\) (treatment component 1) to \(a\) and \(E\) (treatment component 2) to \(e\).</td>
<td> </td>
<td>I.55</td>
</tr>
<tr>
<td style="text-align: center"><strong>Interaction</strong></td>
<td style="text-align: center"><em>Interaction on the additive scale</em>: <br /> Pr[Y^(a=1,e=1)=1] - Pr[Y^(a=0,e=1)=1] \(\neq\) Pr[Y^(a=1,e=0)=1] - Pr[Y^(a=0,e=0)=1] <br /> <br /> or <br /> <br /> Pr[Y^(a=1) = 1 | E=1 ] - Pr[Y^(a=0) = 1 | E=1 ] \(\neq\) Pr[Y^(a=1) = 1 | E=0 ] - Pr[Y^(a=0) = 1 | E=0]</td>
<td>The causal effect of \(A\) on \(Y\) after a joint intervention that set \(E\) to 1 differs from the causal effect of \(A\) on \(Y\) after a joint intervention that set \(E\) to 0. (Definition also holds if you swap \(A\) and \(E\).)</td>
<td>Different from effect modification because an effect modifier \(M\) is not considered a treatment or otherwise a variable on which we can intervene. In interaction, interventions \(A\) and \(E\) have equal status. <br /><br /> Note from definition 2 on the left, however, that the mathematical definitions of effect modification and interaction line up. This means that if you <em>randomize</em> an interactor, it becomes equivalent to an effect modifier. <br /><br /> Inference over joint counterfactuals require that the identifying conditions of exchangeability, positivity, and consistency hold for <em>both</em> treatments.</td>
<td>I.55</td>
</tr>
<tr>
<td style="text-align: center"><strong>Counterfactual response type</strong></td>
<td style="text-align: center"> </td>
<td>A characteristic of an <em>individual</em> that refers to how she will respond to a treatment.</td>
<td>For example, an individual may have the same counterfactual outcome regardless of treatment, be helped by the treatment, or be hurt by the treatment. <br /><br /> The presence of an interaction between \(A\) and \(E\) implies that some individuals exist such that their counterfactual outcomes under \(A=a\) cannot be determined without knowledge of \(E\).</td>
<td>I.58</td>
</tr>
<tr>
<td style="text-align: center"><strong>Sufficient-component causes</strong></td>
<td style="text-align: center"> </td>
<td>A set of variables that are sufficient to determine the outcome for a specific individual.</td>
<td>The minimal set of sufficient causes can be different for distinct individuals in the same study. For example, a patient with background factor \(U_1\) might have the same outcome regardless of treatment, whereas another patient’s outcome might be driven by both a treatment \(A\) and interactor \(E\). <br /><br /> Minimal sufficient-component causes are sometimes visualized with pie charts. <br /><br /> <em>Contrast between counterfactual outcomes framework and sufficient-component-cause framework:</em> <br /> The <em>sufficient-component-cause framework</em> focuses on questions like: “given a particular effect, what are the various events which might have been its cause?”, whereas the <em>counterfactual outcomes framework</em> focuses on questions like: “what would have occurred if a particular factor were intervened upon and set to a different level than it was?”. The sufficient-component-cause framework requires more detailed mechanistic knowledge and is generally more a pedagogical tool than a data analysis tool.</td>
<td>I.61</td>
</tr>
<tr>
<td style="text-align: center"><strong>Sufficient cause interaction</strong></td>
<td style="text-align: center"> </td>
<td>A sufficient cause interaction between \(A\) and \(E\) exists in a population if \(A\) and \(E\) occur together in a sufficient cause.</td>
<td>Can be <em>synergistic</em> (A = 1 and E = 1 present in sufficient cause) or <em>antagonistic</em> (e.g. A = 1 and E = 0 present in sufficient cause).</td>
<td>I.64</td>
</tr>
</tbody>
</table>
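<p>The additive-scale definition of interaction translates directly into code. A tiny sketch (the function name is my own) taking as input the four joint counterfactual risks \(Pr[Y^{a,e}=1]\), which in practice would themselves have to be identified under exchangeability, positivity, and consistency for <em>both</em> treatments:</p>

```python
def additive_interaction(p11, p10, p01, p00, tol=1e-12):
    """p_ae = Pr[Y^{a,e} = 1]. There is interaction on the additive scale
    when the effect of A under a joint intervention setting E=1 differs
    from the effect of A under a joint intervention setting E=0."""
    effect_a_when_e1 = p11 - p01
    effect_a_when_e0 = p10 - p00
    return abs(effect_a_when_e1 - effect_a_when_e0) > tol
```

By the symmetry of the definition, swapping the roles of \(A\) and \(E\) (comparing \(p_{11}-p_{10}\) with \(p_{01}-p_{00}\)) gives the same answer.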
<h3 id="chapter-6--causal-diagrams">Chapter 6: Causal Diagrams</h3>
<table>
<thead>
<tr>
<th style="text-align: center">Term</th>
<th style="text-align: center">Definition</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><strong>Path</strong></td>
<td style="text-align: center">A path on a DAG is a sequence of edges connecting two variables on the graph, with each edge occurring only once.</td>
<td>I.76</td>
</tr>
<tr>
<td style="text-align: center"><strong>Collider</strong></td>
<td style="text-align: center">A collider is a variable in which two arrowheads on a path collide. <br /> <br /> For example, \(Y\) is a collider in the path \(A \rightarrow Y \leftarrow L\) in the following DAG: <br /> <img src="/assets/hernan_dags/6_1.png" width="150" /></td>
<td>I.76</td>
</tr>
<tr>
<td style="text-align: center"><strong>Blocked path</strong></td>
<td style="text-align: center">A path on a DAG is blocked if and only if: <br />1. it contains a noncollider that has been conditioned on, or <br /> 2. it contains a collider that has not been conditioned on and has no descendants that have been conditioned on.</td>
<td>I.76</td>
</tr>
<tr>
<td style="text-align: center"><strong>d-separation</strong></td>
<td style="text-align: center">Two variables are d-separated if all paths between them are blocked</td>
<td>I.76</td>
</tr>
<tr>
<td style="text-align: center"><strong>d-connectedness</strong></td>
<td style="text-align: center">Two variables are d-connected if they are not d-separated</td>
<td>I.76</td>
</tr>
<tr>
<td style="text-align: center"><strong>Faithfulness</strong></td>
<td style="text-align: center">Faithfulness is when all non-null associations implied by the causal DAG actually exist in the data distribution. Unfaithfulness can arise, for example, in certain settings of effect modification, by design as in matching experiments, or in the presence of certain deterministic relations between variables in the graph.</td>
<td>I.77</td>
</tr>
<tr>
<td style="text-align: center"><strong>Positivity</strong> (on graphs)</td>
<td style="text-align: center">The arrows from the nodes \(L\) to the treatment node \(A\) are not deterministic. (Concerned with nodes <em>into</em> treatment nodes)</td>
<td>I.75</td>
</tr>
<tr>
<td style="text-align: center"><strong>Consistency</strong> (on graphs)</td>
<td style="text-align: center">Well-defined intervention criteria: the arrow from treatment \(A\) to outcome \(Y\) corresponds to a potentially hypothetical but relatively unambiguous intervention. (Concerned with nodes <em>leaving</em> the treatment nodes.)</td>
<td>I.75</td>
</tr>
<tr>
<td style="text-align: center"><strong>Systematic bias</strong></td>
<td style="text-align: center">The data are insufficient to identify the causal effect even with an infinite sample size. This occurs when any structural association between treatment and outcome does not arise from the causal effect of treatment on outcome in the population of interest.</td>
<td>I.79</td>
</tr>
<tr>
<td style="text-align: center"><strong>Conditional bias</strong></td>
<td style="text-align: center"><em>For average causal effects within levels of \(L\)</em>: <br /> Conditional bias exists whenever the effect measure (e.g. causal risk ratio) and the corresponding association measure (e.g. associational risk ratio) are not equal.<br /> Mathematically, this is when: <br /> \(Pr[Y^{a=1}=1 | L = l] - Pr[Y^{a=0}=1 | L = l]\) differs from \(Pr[Y=1|L=l, A = 1] - Pr[Y=1|L=l, A=0]\) for at least one stratum \(l\).<br /><br /> <em>For average causal effects in the entire population</em>: <br /> Conditional bias exists whenever <br /> \(Pr[Y^{a=1}=1] - Pr[Y^{a=0}=1]\) \(\neq\) \(Pr[Y = 1| A = 1] - Pr[Y = 1 | A = 0]\).</td>
<td>I.79</td>
</tr>
<tr>
<td style="text-align: center"><strong>Bias under the null</strong></td>
<td style="text-align: center">When the null hypothesis of no causal effect of treatment on the outcome holds, but treatment and outcome are associated in the data. <br /><br />Can arise from confounding, selection bias, or measurement error.</td>
<td>I.79</td>
</tr>
<tr>
<td style="text-align: center"><strong>Confounding</strong></td>
<td style="text-align: center">The treatment and outcome share a common cause.</td>
<td>I.79</td>
</tr>
<tr>
<td style="text-align: center"><strong>Selection bias</strong></td>
<td style="text-align: center">Conditioning on common effects.</td>
<td>I.79</td>
</tr>
<tr>
<td style="text-align: center"><strong>Surrogate effect modifier</strong></td>
<td style="text-align: center">An effect modifier that does not directly influence the outcome but might stand in for a <strong>causal effect modifier</strong> that does.</td>
<td>I.81</td>
</tr>
</tbody>
</table>
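<p>The blocked-path and d-separation rules above are mechanical enough to implement by hand. Below is a small self-contained sketch (not a library API; the representation and names are my own): the DAG is a dict mapping each node to its children, all paths between two nodes are enumerated, and each path is tested against the two blocking rules.</p>

```python
def descendants(dag, v):
    """All nodes reachable from v via directed edges, including v itself."""
    out, stack = {v}, [v]
    while stack:
        for child in dag.get(stack.pop(), ()):
            if child not in out:
                out.add(child)
                stack.append(child)
    return out

def d_separated(dag, x, y, z):
    """dag: {node: [children]}. True iff every path between x and y is blocked
    given conditioning set z, per the blocked-path rules."""
    z = set(z)
    nbrs = {}                              # undirected adjacency
    for u, children in dag.items():
        for c in children:
            nbrs.setdefault(u, set()).add(c)
            nbrs.setdefault(c, set()).add(u)
    def edge_into(a, b):                   # True if a -> b in the DAG
        return b in dag.get(a, ())
    def path_blocked(path):
        for i in range(1, len(path) - 1):
            prev, v, nxt = path[i - 1], path[i], path[i + 1]
            if edge_into(prev, v) and edge_into(nxt, v):   # v is a collider
                if not (descendants(dag, v) & z):
                    return True            # unconditioned collider blocks the path
            elif v in z:
                return True                # conditioned noncollider blocks the path
        return False
    def paths(cur, seen):                  # enumerate simple undirected paths
        if cur == y:
            yield seen
            return
        for n in nbrs.get(cur, ()):
            if n not in seen:
                yield from paths(n, seen + [n])
    return all(path_blocked(p) for p in paths(x, [x]))
```

On the confounding DAG \(L \rightarrow A\), \(L \rightarrow Y\) (with no \(A \rightarrow Y\) arrow), \(A\) and \(Y\) are d-connected marginally but d-separated given \(L\); on the collider DAG \(A \rightarrow C \leftarrow Y\), they are d-separated marginally but become d-connected after conditioning on \(C\).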
<h3 id="chapter-7--confounding">Chapter 7: Confounding</h3>
<table>
<thead>
<tr>
<th style="text-align: center">Concept</th>
<th style="text-align: center">Definition or Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><strong>Backdoor Path</strong></td>
<td style="text-align: center">A noncausal path between treatment and outcome that remains even if all arrows pointing from treatment to other variables (the descendants of treatment) are removed. That is, the path has an arrow pointing into treatment.</td>
<td>I.83</td>
</tr>
<tr>
<td style="text-align: center"><strong>Confounding by indication</strong> (or <strong>Channeling</strong>)</td>
<td style="text-align: center">A drug is more likely to be prescribed to individuals with a certain condition that is both an indication for treatment and a risk factor for the disease.</td>
<td>I.84</td>
</tr>
<tr>
<td style="text-align: center"><strong>Channeling</strong></td>
<td style="text-align: center">Confounding by indication in which patient-specific risk factors \(L\) encourage doctors to use certain drug \(A\) within a class of drugs.</td>
<td>I.84</td>
</tr>
<tr>
<td style="text-align: center"><strong>Backdoor Criterion</strong></td>
<td style="text-align: center">Assuming consistency and positivity, the <em>backdoor criterion</em> specifies the circumstances under which (a) confounding can be eliminated from the analysis, and (b) a causal effect of treatment on outcome can be identified. <br /><br /> The criterion: the effect is identifiable if all backdoor paths can be blocked by conditioning on variables that are not affected by the treatment. <br /><br /> The two settings in which this is possible are: <br /><br /> 1. No common causes of treatment and outcome. <br /><br /> 2. Common causes, but enough measured variables to block all backdoor paths.</td>
<td>I.85</td>
</tr>
<tr>
<td style="text-align: center"><strong>Single-world intervention graphs (SWIG)</strong></td>
<td style="text-align: center">A causal diagram that unifies counterfactual and graphical approaches by explicitly including the counterfactual variables on the graph. <br /><br /> Depicts variables and causal relations that would be observed in a hypothetical world in which all individuals received treatment level \(a\). In other words, it is a <em>graph</em> that represents the counterfactual <em>world</em> created by a <em>single intervention</em>, unlike normal DAGs that represent variables and causal relations from the actual world.</td>
<td>I.91</td>
</tr>
<tr>
<td style="text-align: center"><strong>Two categories of methods for confounding adjustment</strong></td>
<td style="text-align: center"><strong>G-Methods</strong>:<br /> G-formula, IP weighting, G-estimation. Exploit conditional exchangeability in subsets defined by \(L\) to estimate the causal effect of \(A\) on \(Y\) in the entire population or in any subset of the population. <br /><br /> <strong>Stratification-based Methods</strong>: Stratification, Restriction, Matching. Methods that exploit conditional exchangeability in subsets defined by \(L\) to estimate the association between \(A\) and \(Y\) in those subsets only.</td>
<td>I.93</td>
</tr>
<tr>
<td style="text-align: center"><strong>Difference-in-differences</strong> and <strong>negative outcome controls</strong></td>
<td style="text-align: center">A technique to account for unmeasured confounders under specific conditions. <br /><br /> The idea is to measure a “negative outcome control”, which is the same as the main outcome but <em>right before treatment</em>. Then, instead of just reporting the effect of the treatment on the <em>outcome</em> (treatment effect + confounding effect), you subtract out the effect of treatment on the <em>negative outcome</em> (only confounding effect). What’s left is the <em>difference-in-differences</em>. <br /><br /> This requires the assumption of <em>additive equi-confounding</em>: <br /> \(E[Y^{0}|A=1] - E[Y^{0}|A=0]\) = \(E[C|A=1] - E[C|A=0]\). <br /><br /> Negative outcome controls are also sometimes used to try to <em>detect</em> confounding. <br /><br /> Note: The DAG demonstration (Figure 7.11) is really useful for this one.</td>
<td>I.95</td>
</tr>
<tr>
<td style="text-align: center"><strong>Frontdoor criterion</strong> and <strong>Frontdoor adjustment</strong></td>
<td style="text-align: center">A two-step standardization process to estimate a causal effect in the presence of a confounded causal effect <em>that is mediated by an unconfounded mediator variable</em>. <br /><br /> Given a DAG such as: <br /> <img src="/assets/hernan_dags/7_12.png" width="150" /> <br /> \(Pr[Y^{a}=1] = \sum_{m}Pr[M^{a}=m]Pr[Y^{m}=1]\). <br /><br /> Thus, standardization can be applied in two steps: <br /><br /> 1. Compute \(Pr[M^{a}=m]\) as \(Pr[M=m| A=a]\), and <br /><br /> 2. Compute \(Pr[Y^{m}=1]\) as \(\sum_{a'}Pr[Y=1|M=m,A=a']Pr[A=a']\) <br /><br /> These are then combined with the formula <br /> \(\sum_{m}Pr[M=m| A=a]\sum_{a'}Pr[Y=1|M=m,A=a']Pr[A=a']\) <br /><br /> The name <em>frontdoor adjustment</em> comes from the fact that it relies on the path from \(A\) to \(Y\) passing through a descendant \(M\) of \(A\) that causes \(Y\).</td>
<td>I.96</td>
</tr>
</tbody>
</table>
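<p>The frontdoor formula above lends itself to a quick numerical check. Below is a minimal simulation sketch (all parameter values are made up for illustration): an unmeasured \(U\) confounds \(A\) and \(Y\), while \(A\) acts on \(Y\) only through the mediator \(M\). The frontdoor estimate removes the confounding by \(U\), whereas the naive associational contrast does not:</p>

```python
# Frontdoor adjustment sketch (hypothetical data-generating values):
# U is an unmeasured confounder of A and Y; A affects Y only through M.
import random

random.seed(0)

rows = []
for _ in range(100_000):
    u = random.random() < 0.5
    a = random.random() < (0.7 if u else 0.3)   # U confounds treatment
    m = random.random() < (0.8 if a else 0.2)   # M depends only on A
    y = random.random() < (0.6 if m else 0.2) + (0.1 if u else 0.0)
    rows.append((a, m, y))

def pr(pred, given=lambda r: True):
    """Empirical Pr[pred | given]."""
    sub = [r for r in rows if given(r)]
    return sum(pred(r) for r in sub) / len(sub)

def frontdoor(a):
    """Pr[Y^a = 1] via the two-step standardization formula."""
    total = 0.0
    for m in (False, True):
        pm = pr(lambda r: r[1] == m, lambda r: r[0] == a)  # Pr[M=m | A=a]
        inner = sum(
            pr(lambda r: r[2], lambda r: r[1] == m and r[0] == ap)  # Pr[Y=1 | M=m, A=a']
            * pr(lambda r: r[0] == ap)                              # Pr[A=a']
            for ap in (False, True)
        )
        total += pm * inner
    return total

# The frontdoor contrast is deconfounded; the naive contrast is not.
effect = frontdoor(True) - frontdoor(False)
naive = pr(lambda r: r[2], lambda r: r[0]) - pr(lambda r: r[2], lambda r: not r[0])
```

With these (hypothetical) parameters the frontdoor contrast lands near the true causal risk difference, while the naive contrast is inflated by the path through \(U\).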
<h3 id="chapter-8--selection-bias">Chapter 8: Selection Bias</h3>
<p><strong>Note</strong>: I have almost no notes in here, because the DAG section contains pretty much all the content I’m interested in noting here.</p>
<table>
<thead>
<tr>
<th style="text-align: center">Concept</th>
<th style="text-align: center">Definition or Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><strong>Competing Event</strong></td>
<td style="text-align: center">An event that prevents the outcome of interest from happening. For example, death is a competing event, because once it occurs, no other outcome is possible.</td>
<td>I.108</td>
</tr>
<tr>
<td style="text-align: center"><strong>Multiplicative survival model</strong></td>
<td style="text-align: center">A multiplicative survival model is of the form: <br /><br /> \(Pr[Y=0|E=e,A=a]=g(e)h(a)\) <br /><br /> The data follow such a model when there is no interaction between \(A\) and \(E\) on the multiplicative scale. This allows, for example, \(A\) and \(E\) to be conditionally independent given \(Y=0\) but conditionally dependent given \(Y=1\). See Technical Point 8.2 and the example in Figure 8.13.</td>
<td>I.109</td>
</tr>
<tr>
<td style="text-align: center"><strong>Healthy worker bias</strong></td>
<td style="text-align: center">Example of selection bias where people are only included in the study if they are healthy enough, say, to come into work and be tested.</td>
<td>I.99</td>
</tr>
<tr>
<td style="text-align: center"><strong>Self-selection bias</strong></td>
<td style="text-align: center">Example of selection bias where people volunteer for enrollment.</td>
<td>I.100</td>
</tr>
</tbody>
</table>
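<p>The mechanism behind the examples in this chapter, conditioning on a common effect, can be demonstrated in a few lines of simulation (all numbers below are hypothetical). \(A\) and \(Y\) are generated independently, but both affect selection \(S\) into the study, as in healthy worker or self-selection bias; restricting the analysis to the selected rows induces an association:</p>

```python
# Minimal collider / selection-bias sketch (hypothetical numbers):
# treatment A and outcome Y are independent, but both affect
# selection S into the study.
import random

random.seed(1)

data = []
for _ in range(100_000):
    a = random.random() < 0.5
    y = random.random() < 0.5          # independent of A by construction
    s = a or not y                     # selected if treated OR healthy
    data.append((a, y, s))

def risk(rows, a):
    """Empirical Pr[Y=1 | A=a] in the given rows."""
    grp = [r for r in rows if r[0] == a]
    return sum(r[1] for r in grp) / len(grp)

# In the full population there is (approximately) no association...
full_assoc = risk(data, True) - risk(data, False)
# ...but conditioning on S = 1 (a common effect of A and Y) creates one.
selected = [r for r in data if r[2]]
sel_assoc = risk(selected, True) - risk(selected, False)
```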
<h3 id="chapter-9--measurement-bias">Chapter 9: Measurement Bias</h3>
<table>
<thead>
<tr>
<th style="text-align: center">Concept</th>
<th style="text-align: center">Definition or Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><strong>Measurement bias</strong> or <strong>Information bias</strong></td>
<td style="text-align: center">Systematic difference between the associational risk and the causal risk that arises due to measurement error. Prevents causal inference even when the identifiability conditions of exchangeability, positivity, and consistency hold.</td>
<td>I.112</td>
</tr>
<tr>
<td style="text-align: center"><strong>Independent measurement error</strong></td>
<td style="text-align: center">Independent measurement error takes place when the measurement error of the treatment (\(U_{A}\)) and the measurement error of the response (\(U_{Y}\)) are d-separated. Dependent measurement error is when they are d-connected.</td>
<td>I.11</td>
</tr>
<tr>
<td style="text-align: center"><strong>Nondifferential measurement error</strong></td>
<td style="text-align: center">Measurement error is <em>nondifferential</em> with respect to the outcome if \(U_{A}\) and \(Y\) are d-separated. Measurement error is nondifferential with respect to the treatment if \(U_{Y}\) and \(A\) are d-separated.</td>
<td>I.11</td>
</tr>
<tr>
<td style="text-align: center"><strong>Intention-to-treat effect</strong></td>
<td style="text-align: center">The causal effect of randomized treatment assignment \(Z\) on the outcome \(Y\) in an intention-to-treat trial. Depends on the strength of the direct effect of assignment on outcome (\(Z \rightarrow Y\)), the effect of assignment on the treatment actually received (\(Z \rightarrow A\)), and the effect of the treatment received on the outcome (\(A \rightarrow Y\)). In theory, this does not require adjustment for confounding, has null preservation, and is conservative. See below for comments on the latter two.</td>
<td>I.116</td>
</tr>
<tr>
<td style="text-align: center"><strong>The exclusion restriction</strong></td>
<td style="text-align: center">(The goal of double-blinding). The assumption that there is no direct arrow from assigned treatment \(Z\) to outcome \(Y\) in an intention-to-treat design.</td>
<td>I.117</td>
</tr>
<tr>
<td style="text-align: center"><strong>Null Preservation</strong> in an ITT analysis</td>
<td style="text-align: center">If treatment \(A\) has a null effect on \(Y\), then assigned treatment \(Z\) also has a null effect on \(Y\). Ensures, in theory, that a null effect will be declared when none exists. However, it requires that the exclusion restriction holds, which breaks down unless there is perfect double-blinding.</td>
<td>I.119</td>
</tr>
<tr>
<td style="text-align: center"><strong>Conservatism of the ITT vs Per-protocol</strong></td>
<td style="text-align: center">The ITT effect is supposed to be closer to the null than the per-protocol effect, because imperfect adherence results in attenuation rather than exaggeration of the effect. The ITT estimate thus appears to be a lower bound for the per-protocol effect (and is thus conservative). <br /><br /> However, there are three issues with this: <br /> 1. The argument assumes monotonicity of effects (treatment pushes in the same direction for all patients). If, say, there is inconsistent adherence and thus inconsistent effects, the ITT estimate could become anti-conservative. <br /> 2. Even given monotonicity, the ITT effect would only be conservative compared to <em>placebo</em>, not necessarily in head-to-head trials, where adherence to the second drug might differ. <br /> 3. Even if the ITT estimate is conservative, this makes it dangerous when the goal is evaluating safety, where you arguably want to be more <em>aggressive</em> in finding signal.</td>
<td> </td>
</tr>
<tr>
<td style="text-align: center"><strong>Per-protocol effect</strong></td>
<td style="text-align: center">The causal effect of randomized treatment that would have been observed if all individuals had adhered to their assigned treatment as specified in the protocol of the experiment. <em>Requires adjustment for confounding</em>.</td>
<td>I.116</td>
</tr>
<tr>
<td style="text-align: center"><strong>As-treated analysis</strong></td>
<td style="text-align: center">An analysis to assess for per-protocol effect. Includes <em>all patients</em> and compares those treated (\(A=1\)) vs not treated (\(A=0\)), independent of their assignment \(Z\). Confounded.</td>
<td>I.118</td>
</tr>
<tr>
<td style="text-align: center"><strong>Conventional per-protocol analysis</strong></td>
<td style="text-align: center">An analysis to assess for per-protocol effect. Limits the population to those who adhered to the study protocol, subsetting to those for whom \(A=Z\). Induces a <em>selection bias</em> on \(A=Z\), and thus still requires adjustment on \(L\).</td>
<td>I.118</td>
</tr>
<tr>
<td style="text-align: center"><strong>Tradeoff between ITT and Per-protocol</strong></td>
<td style="text-align: center">Summary: Estimating the per-protocol effect introduces unmeasured confounding, which needs to be (imperfectly) adjusted for. Intention-to-treat introduces a misclassification bias and does not necessarily deliver on its purported guarantee of conservatism. As such, there is a real tradeoff here.</td>
<td>I.117-I.120</td>
</tr>
</tbody>
</table>
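<p>The contrast between the ITT effect, the (confounded) as-treated analysis, and an \(L\)-adjusted per-protocol estimate can be illustrated with a small simulation. Everything below is hypothetical: \(Z\) is randomized, adherence depends on a risk factor \(L\) that also raises baseline risk, the exclusion restriction holds, and the true effect of \(A\) is set to \(-0.2\) by construction:</p>

```python
# Hypothetical trial: Z randomized; adherence depends on risk factor L,
# which also raises baseline risk; exclusion restriction holds (Z acts
# on Y only through the received treatment A). True effect of A: -0.2.
import random

random.seed(2)

rows = []
for _ in range(200_000):
    z = random.random() < 0.5
    l = random.random() < 0.5
    # Sicker patients (L=1) adhere less often when assigned treatment;
    # no access to treatment when assigned control.
    a = z and random.random() < (0.5 if l else 0.9)
    y = random.random() < 0.5 + (0.2 if l else 0.0) - (0.2 if a else 0.0)
    rows.append((z, l, a, y))

def mean_y(cond):
    sub = [r for r in rows if cond(r)]
    return sum(r[3] for r in sub) / len(sub)

# ITT: attenuated toward the null (~70% adherence times -0.2).
itt = mean_y(lambda r: r[0]) - mean_y(lambda r: not r[0])
# As-treated: confounded by L; here it exaggerates the effect.
as_treated = mean_y(lambda r: r[2]) - mean_y(lambda r: not r[2])
# Stratifying on L and averaging over its marginal distribution
# recovers (roughly) the true per-protocol effect of -0.2.
per_protocol = sum(
    mean_y(lambda r, l=l: r[2] and r[1] == l)
    - mean_y(lambda r, l=l: (not r[2]) and r[1] == l)
    for l in (False, True)
) / 2
```

Under these made-up parameters the ITT estimate lands near \(-0.14\) (attenuated), the as-treated contrast overshoots the truth, and the \(L\)-adjusted estimate recovers roughly \(-0.2\).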
<h3 id="chapter-10--random-variability">Chapter 10: Random Variability</h3>
<p>Sorry, I’m skipping this section, because the key terms are all stats concepts and it’s mostly a pump-up chapter for the rest of the book.</p>
<h3 id="chapter-11-why-model">Chapter 11: Why Model?</h3>
<table>
<thead>
<tr>
<th style="text-align: center">Concept</th>
<th style="text-align: center">Definition or Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><strong>Saturated Models</strong></td>
<td style="text-align: center">Models that do not impose restrictions on the data distribution. Generally, these are models whose number of parameters in a conditional mean model equals the number of means. For example, the linear model \(E[Y|X] = \theta_{0} + \theta_{1}X\) when the population is stratified into only two groups. These are <em>non-parametric models</em>.</td>
<td>II.143</td>
</tr>
<tr>
<td style="text-align: center"><strong>Non-parametric estimator</strong></td>
<td style="text-align: center">Estimators that produce estimates from the data without any a priori restrictions on the true function. When using the entire population rather than a sample, these yield the true value of the population parameter.</td>
<td>II.143</td>
</tr>
</tbody>
</table>
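<p>The saturated-model idea can be checked directly. With one binary covariate, the two-parameter linear model has exactly one parameter per stratum mean, so the fit reproduces the stratum means with no restriction imposed (the numbers below are made up):</p>

```python
# Toy check that E[Y|X] = b0 + b1*X is saturated for binary X:
# OLS with a single binary regressor has the closed form
#   b0 = mean(Y | X=0),  b1 = mean(Y | X=1) - mean(Y | X=0),
# so the fitted values equal the stratum means exactly.
data = [(0, 1.0), (0, 3.0), (1, 4.0), (1, 8.0)]   # (x, y) pairs

y0 = [y for x, y in data if x == 0]
y1 = [y for x, y in data if x == 1]
mean0 = sum(y0) / len(y0)
mean1 = sum(y1) / len(y1)

b0, b1 = mean0, mean1 - mean0
fitted = {0: b0, 1: b0 + b1}   # equals {0: mean0, 1: mean1}
```

With more strata than parameters (e.g. a continuous \(X\) and only two coefficients), the same model would impose a restriction and stop being saturated.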
<h3 id="chapter-12">Chapter 12: IP Weighting and Marginal Structural Models</h3>
<table>
<thead>
<tr>
<th style="text-align: center">Concept</th>
<th style="text-align: center">Definition or Notes</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center"><strong>Stabilized Weights</strong></td>
<td style="text-align: center">Weights \(SW^{A} = f(A)/f(A|L)\), which replace the numerator of the nonstabilized IP weight \(W^{A} = 1/f(A|L)\) with the marginal density of treatment. They remove the confounding by \(L\) just like \(W^{A}\), but are less variable and therefore yield narrower confidence intervals.</td>
<td> </td>
</tr>
<tr>
<td style="text-align: center"><strong>Marginal Structural Model</strong></td>
<td style="text-align: center">A model for the marginal mean of a counterfactual outcome, e.g. \(E[Y^{a}] = \beta_{0} + \beta_{1}a\). <em>Marginal</em> because it models the marginal (rather than conditional) mean of \(Y^{a}\); <em>structural</em> because it models a counterfactual quantity. Parameters can be estimated by IP weighting.</td>
<td> </td>
</tr>
</tbody>
</table>
<p>To-do: add formula, code, and notes for <strong>IP weighting</strong>, <strong>standardized IP weighting</strong>, and <strong>marginal structural models</strong>.</p>
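<p>As a starting point for the to-do above, here is a minimal sketch (hypothetical data-generating values) of IP weighting with stabilized weights \(SW^{A} = f(A)/f(A|L)\), used to estimate the parameter of a marginal structural model for the causal risk difference:</p>

```python
# Sketch of IP weighting with stabilized weights SW = f(A) / f(A|L)
# (hypothetical values): L confounds A and Y; the true causal risk
# difference of A on Y is 0.2 by construction.
import random

random.seed(3)

rows = []
for _ in range(100_000):
    l = random.random() < 0.5
    a = random.random() < (0.8 if l else 0.2)   # treatment depends on L
    y = random.random() < 0.3 + (0.3 if l else 0.0) + (0.2 if a else 0.0)
    rows.append((l, a, y))

n = len(rows)
p_a1 = sum(r[1] for r in rows) / n              # marginal f(A=1)
p_a1_given_l = {                                # conditional f(A=1 | L=l)
    l: sum(r[1] for r in rows if r[0] == l) / sum(1 for r in rows if r[0] == l)
    for l in (False, True)
}

def sw(l, a):
    """Stabilized weight f(A=a) / f(A=a | L=l)."""
    num = p_a1 if a else 1.0 - p_a1
    den = p_a1_given_l[l] if a else 1.0 - p_a1_given_l[l]
    return num / den

def weighted_risk(a):
    """Risk under treatment level a in the weighted pseudo-population."""
    grp = [(sw(r[0], r[1]), r[2]) for r in rows if r[1] == a]
    return sum(w * y for w, y in grp) / sum(w for w, _ in grp)

# Parameter of the marginal structural model E[Y^a] = b0 + b1 * a.
msm_effect = weighted_risk(True) - weighted_risk(False)
# Compare with the confounded, unweighted contrast:
naive = (sum(r[2] for r in rows if r[1]) / sum(1 for r in rows if r[1])
         - sum(r[2] for r in rows if not r[1]) / sum(1 for r in rows if not r[1]))
```

Under these made-up parameters the weighted contrast recovers roughly the true 0.2, while the unweighted contrast is inflated by the confounding through \(L\).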
<p>DONT MISS THE DOUBLY ROBUST ESTIMATOR in TECHNICAL POINT 13.2</p>
Wed, 19 Jun 2019 07:50:00 -0400
http://sgfin.github.io/2019/06/19/Causal-Inference-Book-Glossary-and-Notes/

FAQ on Medical Adversarial Attacks Policy Paper

<ul id="markdown-toc">
<li><a href="#whats-the-paper-and-why-this-faq" id="markdown-toc-whats-the-paper-and-why-this-faq">What’s the paper and why this FAQ?</a></li>
<li><a href="#do-you-think-adversarial-attacks-are-the-biggest-concern-in-using-machine-learning-in-healthcare-a-nowhere-close-then-why-write-the-paper" id="markdown-toc-do-you-think-adversarial-attacks-are-the-biggest-concern-in-using-machine-learning-in-healthcare-a-nowhere-close-then-why-write-the-paper">Do you think adversarial attacks are the biggest concern in using machine learning in healthcare? (A: Nowhere close!) Then why write the paper?</a></li>
<li><a href="#there-seems-to-have-been-something-of-a-pivot-between-the-preprint-and-the-policy-forum-discussion-with-the-latter-focusing-much-less-on-images--was-this-intentional" id="markdown-toc-there-seems-to-have-been-something-of-a-pivot-between-the-preprint-and-the-policy-forum-discussion-with-the-latter-focusing-much-less-on-images--was-this-intentional">There seems to have been something of a pivot between the preprint and the policy forum discussion, with the latter focusing much less on images. Was this intentional?</a></li>
<li><a href="#in-the-paper-you-frame-existing-examples-like-upcoding-and-claims-craftsmanship-as-adversarial-attacks-or-at-least-as-their-precursors--is-that-fair" id="markdown-toc-in-the-paper-you-frame-existing-examples-like-upcoding-and-claims-craftsmanship-as-adversarial-attacks-or-at-least-as-their-precursors--is-that-fair">In the paper, you frame existing examples like upcoding and claims craftsmanship as adversarial attacks, or at least as their precursors. Is that fair?</a></li>
<li><a href="#isnt-this-unrealistic--i-mean-would-there-ever-be-cases-when-someone-actually-uses-adversarial-examples" id="markdown-toc-isnt-this-unrealistic--i-mean-would-there-ever-be-cases-when-someone-actually-uses-adversarial-examples">Isn’t this unrealistic? I mean, would there ever be cases when someone <em>actually</em> uses adversarial examples?</a></li>
<li><a href="#adversarial-attacks-sounds-scary--do-you-think-people-will-use-these-as-tools-to-hurt-people-by-hacking-diagnostics-etc" id="markdown-toc-adversarial-attacks-sounds-scary--do-you-think-people-will-use-these-as-tools-to-hurt-people-by-hacking-diagnostics-etc">“Adversarial attacks” sounds scary. Do you think people will use these as tools to hurt people by hacking diagnostics, etc?</a></li>
<li><a href="#are-you-hoping-to-stall-the-development-of-medical-ml-because-of-adversarial-attacks" id="markdown-toc-are-you-hoping-to-stall-the-development-of-medical-ml-because-of-adversarial-attacks">Are you hoping to stall the development of medical ML because of adversarial attacks?</a></li>
<li><a href="#small-note-on-the-figure" id="markdown-toc-small-note-on-the-figure">Small note on the figure</a></li>
</ul>
<h3 id="whats-the-paper-and-why-this-faq">What’s the paper and why this FAQ?</h3>
<p>Last spring, some colleagues (chiefly Andy Beam) and I released a <a href="https://arxiv.org/pdf/1804.05296.pdf">preprint</a> on adversarial attacks on medical computer vision systems. This manuscript was targeted at a technical audience. It was written with the goal of explaining why adversarial attacks researchers should consider healthcare applications among their threat models, as well as to provide a few technical examples as a proof of concept. I ended up getting a lot of great feedback/pushback via email and Twitter, which I really appreciated and which informed an update of the preprint on arxiv.</p>
<p>After the article was released, we were also put in touch with <a href="https://hls.harvard.edu/faculty/directory/10992/Zittrain">Jonathan Zittrain</a> and John Bowers from Harvard Law School as well as <a href="https://www.media.mit.edu/people/joi/overview/">Joi Ito</a> of the MIT Media Lab. These are incredibly thoughtful people with a lot of amazing experience. We decided to write a follow-up article targeted more at medical and policy folks, with the intention of examining precedents for adversarial attacks in the healthcare system as it exists today and initiating a conversation about what to do about them going forward. The result is being published today in Science, <a href="http://science.sciencemag.org/content/363/6433/1287">here</a>. It’s been an absolute pleasure working with these guys.</p>
<p>We really tried hard to be thoughtful and measured. Given the nature of the topic, however, I’ve been fretting a bit that the paper will be misconstrued/taken out of context. At a minimum, I anticipate getting a lot of the same questions I got the first time around on the preprint, and figured it’d be easier to write up answers to these in one place. The paper is short and non-technical enough that it doesn’t really need a blog post/explainer per se, so I opted to go with a “FAQ.” Hope it’s not too obnoxious.</p>
<h3 id="do-you-think-adversarial-attacks-are-the-biggest-concern-in-using-machine-learning-in-healthcare-a-nowhere-close-then-why-write-the-paper">Do you think adversarial attacks are the biggest concern in using machine learning in healthcare? (A: Nowhere close!) Then why write the paper?</h3>
<p>Adversarial attacks constitute just one small part of a large taxonomy of potential pitfalls of machine learning (both ML in general and medical ML in particular).</p>
<p>When I think about points of failure of medical machine learning, I think first about things like: <a href="https://arxiv.org/abs/1811.12583">dataset shift</a>, accidentally fitting <a href="https://arxiv.org/abs/1811.03695">confounders</a> or healthcare <a href="https://www.bmj.com/content/361/bmj.k1479">dynamics</a> instead of true signal, <a href="http://papers.nips.cc/paper/7613-why-is-my-classifier-discriminatory.pdf">discriminatory bias</a>, <a href="https://www.theatlantic.com/technology/archive/2018/09/the-new-apple-watchs-heart-monitoring-is-complicated/570115/">overdiagnosis</a>, or job displacement. Especially given recent challenges in getting ML to <a href="https://www.thelancet.com/action/showPdf?pii=S2589-5370%2819%2930037-9">generalize to new populations</a>, there are also uncomfortable questions to ask about when and how we can be sure we’re ready to deploy a ML algorithm in a new patient population.</p>
<p>While all of these issues may have general implications for policy, the way I think about them most is in context of how they inform our evaluations of individual ML systems. Each of the above issues demands that specific questions be asked of the systems that we’re evaluating. Questions like: what population was this model fit on, and how does it compare to the population the system will be used in? How could the data I’m feeding this algorithm have changed in the time since the model was developed? Have we thought carefully about the workflow so these algorithms are getting applied to patients with the right priors and the healthcare providers know how to properly act upon positive tests when the time comes?</p>
<p>Our main goal in this work was, in many ways, simply to point out that adversarial attacks at least deserve acknowledgement as one of these potential pitfalls. Questions this reality might prompt us to ask when evaluating a specific system include: Is there a mismatch in incentives between the person developing/hosting the algorithm and the person sending data into that algorithm? If so, are we prepared for the fact that those providing data to the algorithm might try to intentionally craft that data to achieve the results they want? If we decide to try to use models more robust to adversarial attacks, to what extent are we comfortable trading off accuracy in order to do so?</p>
<p>In many application settings, the answer to the incentives question may simply be “no.” But I don’t think that’s necessarily the case for all possible applications of machine learning in healthcare. To boot, we as authors have been slightly disconcerted to find that the high-level decision makers we speak with at hospitals, insurance companies, and elsewhere who are investing heavily in ML generally aren’t even aware of the existence of adversarial examples. So it’s really that mismatch in awareness relative to other pitfalls of ML that prompted the paper, even if in the grand scheme of things adversarial attacks are just one piece of a very large pie.</p>
<p>Finally – and perhaps most importantly – adversarial examples provide a proof-of-concept for a certain collection of issues with modern machine learning methods. More specifically, adversarial techniques help us assess the worst-case performance against new data distributions, and demonstrate that current models fail to encode key invariants in the classes that we are trying to model. This has implications not just for the susceptibility of these algorithms to manipulation, but more fundamentally for our ability to trust these systems in <em>any</em> safety-critical settings. To boot, it does so in a way that is very tangible for researchers who are trying to design better models that <em>can</em> encode arbitrary invariants and whose behavior aligns exactly with how humans would want/expect them to behave. Aleksander Madry calls this field of research “ML alignment,” which I think is a good phrase. (Addendum: Catherine Olsson has written <a href="https://medium.com/@catherio/unsolved-research-problems-vs-real-world-threat-models-e270e256bc9e">a great medium post</a> that makes many of these same points more thoughtfully and with more nuance. I highly recommend it if you’re interested in this topic.)</p>
<h3 id="there-seems-to-have-been-something-of-a-pivot-between-the-preprint-and-the-policy-forum-discussion-with-the-latter-focusing-much-less-on-images--was-this-intentional">There seems to have been something of a pivot between the preprint and the policy forum discussion, with the latter focusing much less on images. Was this intentional?</h3>
<p>Yes! Our <a href="https://arxiv.org/pdf/1804.05296.pdf">preprint</a> was geared toward a technical audience, and was largely motivated by a desire to get people who work on ML security/robustness research to start thinking about healthcare when considering attacks and defenses, rather than just things more native to the CS world like self-driving cars. At the time, the bulk of high-profile work – both in adversarial attacks and in medical ML – had been done in the computer vision space, so we decided to focus on this for our initial deep dive and in building our three proofs of concept.</p>
<p>As we thought a lot more deeply about the problem, however, we realized that we should probably expand our scope. The bulk of ML happening <em>today</em> in the healthcare industry isn’t in the form of diagnostic algorithms, but is being used internally at insurance companies to process claims directly for first-pass approvals/denials. And the best examples of existing adversarial attack-like behavior take place in the context of providers manipulating these claims. These provide a jumping off point to understand a spectrum of emerging motivations for adversarial behavior across all aspects of the healthcare system and across many different forms of ML. (See the next section on this as well.)</p>
<h3 id="in-the-paper-you-frame-existing-examples-like-upcoding-and-claims-craftsmanship-as-adversarial-attacks-or-at-least-as-their-precursors--is-that-fair">In the paper, you frame existing examples like upcoding and claims craftsmanship as adversarial attacks, or at least as their precursors. Is that fair?</h3>
<p>I think so. The paper “adversarial classification” from KDD ‘04 even talks specifically about fraud detection along with spam and other applications of adversarial attacks.</p>
<p>For a few years, the adversarial examples community focused really heavily on <em>human-imperceptible</em> changes to <em>images,</em> usually computed using <em>gradient</em> tricks. But more recently, I think the community has (appropriately) returned to defining adversarial attacks as any method employed to craft one’s data to influence the behavior of an ML algorithm that processes it. As <a href="https://arxiv.org/pdf/1807.06732.pdf">Gilmer et al</a> say, “what is important about an adversarial example is that an adversary supplied it, not that the example itself is somehow special.” Such framings of the problem allow even for natural data identified through simple techniques like guess-and-check and grid search to be adversarial examples, so long as they are used with adversarial intent, and indeed some recent papers in major CS venues have employed such techniques.</p>
<p>At present, the adversarial behavior in context of things like medical claims appears to be limited to providers stumbling upon or essentially guess-and-checking combinations of codes that will provide higher rates of reimbursement/approval without committing overt fraud. (Some <a href="https://jamanetwork.com/journals/jama/fullarticle/192577">studies like this one</a> have suggested a hefty cohort of physicians think that manipulating claims is even <em>necessary</em> in order to provide high-quality care.) In light of the last paragraph, I think you can make a reasonable case that this behavior itself already constitutes an adversarial attack on the ML systems used by insurance companies, though admittedly a fairly boring one from a technical point of view. But it may be getting more interesting. Hospitals invest <em>immense</em> resources in this process – up to <a href="https://jamanetwork.com/journals/jama/article-abstract/2673148?redirect=true">$99k per physician per year</a> – and I know for a fact that some providers are already investing heavily in software solutions to more explicitly optimize this stuff. Likewise, <a href="https://www.forbes.com/sites/insights-intelai/2019/02/11/how-ai-can-battle-a-beastmedical-insurance-fraud/#20fa437e363e">insurance companies</a> are doubling down on AI solutions to fraud detection, including processing not just claims but things like medical notes. Now that computer vision algorithms are starting to get <a href="https://www.fda.gov/newsevents/newsroom/pressannouncements/ucm604357.htm">FDA approved</a> for medical purposes, I think it’s also likely that payors and regulators will start leveraging this tech as well, which may lead to incentives for computer vision adversarial attacks, a hypothetical scenario at the center of our preprint.</p>
<p>In any event, the real motivation for the claims examples we focus on in the paper is not to call these out as adversarial attacks per se. Rather, it’s to demonstrate how incentives – both positive and negative – already exist in the healthcare system that motivate various players to subtly manipulate their data in order to achieve specific results. This is the soul of the adversarial attacks problem. As both the reach and sophistication of medical machine learning expand across the healthcare system, the techniques used to game these algorithms will likely expand significantly as well.</p>
<h3 id="isnt-this-unrealistic--i-mean-would-there-ever-be-cases-when-someone-actually-uses-adversarial-examples">Isn’t this unrealistic? I mean, would there ever be cases when someone <em>actually</em> uses adversarial examples?</h3>
<p>We got some really good and reasonable pushback on this point the first time around, and once again, I really appreciated it. (Partly) as a result, we’ve spent a lot more time the last few months thinking about the range of adversarial behavior in healthcare information exchange. We ended up shifting the focus a bit as a result. In any event, there’s a whole spectrum of threat models at play here.</p>
<p>Without rehashing the information about claims in the question just above this one too much, machine learning <em>is</em> being used pretty extensively (and more so every day, at increasing sophistication) to make first-pass approvals on claims. And while this seems like a purely financial/bureaucratic concern, this process does already have a major impact on healthcare – at least in the U.S. – <em>today</em>. <a href="https://www.kevinmd.com/blog/2016/07/time-doctors-tell-insurance-companies-really-feel.html">Here is an example</a> of writing from a doctor that explains the level of frustration here, which reflects common experiences. What’s more, there is something more subtle here; when I speak with clinicians, most of them feel like they get no formal feedback from what’s happening under the hood at insurance, so they don’t have any real rhyme or reason for what combination of claims it is that’s resulting in their denials. To boot, there are often many possible codes that could apply to any given procedure or diagnosis, and it’s a bit of a black box as to which will be likely to receive pushback and which will get you the most reimbursement. Currently, most hospitals use extensive teams of human billers to manually try to do this process, but companies for automated billing exist, and I have personally spoken to physicians that are hoping to seek more sophisticated software solutions to more explicitly optimize their billing to avoid these “hurdles.” And since many insurance companies are already starting to use NLP on notes, that will open up a whole new layer of complexity in the process. In light of all this, I actually feel that the dynamics we describe in this paper are not unrealistic at all.</p>
<p>Where we do get (explicitly) hypothetical is when it comes to things like adversarial attacks on imaging systems. I don’t think these are that realistic today, because I can’t find examples of insurance companies or regulators using computer vision algorithms for approvals yet. But in fairness, the first FDA approval for a CV algorithm just happened in 2018 and many more are on the way. Once CV is established as “legit” I think it’s likely that we’ll see them get more integrated into such decisions. But we aren’t there yet. Of course, even when we do get there, the adversarial imaging threat model also requires users to feel comfortable sending in adversarial attacks but not straight up fake images from other patients. But I think that there are technical and – probably more so – legal and moral reasons why physicians/companies would hesitate to send in overtly fraudulent images to a diagnostic algorithm at an insurance/regulatory body. In contrast, I think that many would be comfortable doing more subtle things like rotations/scalings or even just cherry-picking images that give them the best shot from the many images that are often acquired per patient. According to the recently published <a href="https://arxiv.org/pdf/1807.06732.pdf">Rules of the Game</a>, this type of behavior “counts” as an adversarial attack, at least according to many in the field. To boot, doing this effectively (and surely being robust to it) could entail advanced software even if the modifications themselves are simple. In other words, I continue to think that robustness/adversarial attacks researchers should take healthcare seriously as an area of application.</p>
<h3 id="adversarial-attacks-sounds-scary--do-you-think-people-will-use-these-as-tools-to-hurt-people-by-hacking-diagnostics-etc">“Adversarial attacks” sounds scary. Do you think people will use these as tools to hurt people by hacking diagnostics, etc?</h3>
<p>While this may be possible in certain circumstances in theory, I don’t think it’s particularly likely. By analogy, <a href="https://www.wired.com/story/pacemaker-hack-malware-black-hat/">pacemaker hacks</a> have been around for more than a decade, but I don’t see many people feeling motivated to execute them.</p>
<h3 id="are-you-hoping-to-stall-the-development-of-medical-ml-because-of-adversarial-attacks">Are you hoping to stall the development of medical ML because of adversarial attacks?</h3>
<p>Nope! Every author on this paper is very bullish on machine learning as a way to achieve positive impact in all aspects of the healthcare system. We explicitly state this in the paper, as well as the fact that we don’t think these concerns should slow things down, just be a part of an ongoing conversation.</p>
<h3 id="small-note-on-the-figure">Small note on the figure</h3>
<p>As will be immediately recognized by anyone familiar with adversarial examples, the design for the top part of Figure 1 was inspired by Figure 1 in <a href="https://arxiv.org/abs/1412.6572">Goodfellow et al</a> – though the noise itself was generated using a different attack method (PGD) and applied to different data. As it stands, the figure in our Science paper points to our preprint for details of how the attack was generated, and the Goodfellow et al paper is cited in the preprint. However, the Science paper itself doesn’t explicitly credit Goodfellow et al for the design idea. This wasn’t intentional. I pointed this out to the Science team, which decided against updating with a citation since the paper is cited via the preprint and all the actual content in the figure is either original or CC0. But I still feel bad about this. Sorry!</p>
Thu, 21 Mar 2019 00:00:00 -0400
http://sgfin.github.io/2019/03/21/FAQ-On-Adversarial-Science-Paper/
http://sgfin.github.io/2019/03/21/FAQ-On-Adversarial-Science-Paper/Deriving probability distributions using the Principle of Maximum Entropy<ul id="markdown-toc">
<li><a href="#introduction" id="markdown-toc-introduction">Introduction</a> <ul>
<li><a href="#maximum-entropy-principle" id="markdown-toc-maximum-entropy-principle">Maximum Entropy Principle</a></li>
<li><a href="#lagrange-multipliers" id="markdown-toc-lagrange-multipliers">Lagrange Multipliers</a></li>
</ul>
</li>
<li><a href="#1-derivation-of-maximum-entropy-probability-distribution-with-no-other-constraints-uniform-distribution" id="markdown-toc-1-derivation-of-maximum-entropy-probability-distribution-with-no-other-constraints-uniform-distribution">1. Derivation of maximum entropy probability distribution with no other constraints (uniform distribution)</a> <ul>
<li><a href="#satisfy--constraint" id="markdown-toc-satisfy--constraint">Satisfy constraint</a></li>
<li><a href="#putting-together" id="markdown-toc-putting-together">Putting Together</a></li>
</ul>
</li>
<li><a href="#2-derivation-of-maximum-entropy-probability-distribution-for-given-fixed-mean-mu-and-variance-sigma2-gaussian-distribution" id="markdown-toc-2-derivation-of-maximum-entropy-probability-distribution-for-given-fixed-mean-mu-and-variance-sigma2-gaussian-distribution">2. Derivation of maximum entropy probability distribution for given fixed mean \(\mu\) and variance \(\sigma^{2}\) (gaussian distribution)</a> <ul>
<li><a href="#satisfy-first-constraint" id="markdown-toc-satisfy-first-constraint">Satisfy first constraint</a></li>
<li><a href="#satisfy-second-constraint" id="markdown-toc-satisfy-second-constraint">Satisfy second constraint</a></li>
<li><a href="#putting-together-1" id="markdown-toc-putting-together-1">Putting together</a></li>
</ul>
</li>
<li><a href="#3-derivation-of-maximum-entropy-probability-distribution-of-half-bounded-random-variable-with-fixed-mean-barr-exponential-distribution" id="markdown-toc-3-derivation-of-maximum-entropy-probability-distribution-of-half-bounded-random-variable-with-fixed-mean-barr-exponential-distribution">3. Derivation of maximum entropy probability distribution of half-bounded random variable with fixed mean \(\bar{r}\) (exponential distribution)</a> <ul>
<li><a href="#satisfying-first-constraint" id="markdown-toc-satisfying-first-constraint">Satisfying first constraint</a></li>
<li><a href="#satisfying-the-second-constraint" id="markdown-toc-satisfying-the-second-constraint">Satisfying the second constraint</a></li>
<li><a href="#putting-together-2" id="markdown-toc-putting-together-2">Putting together</a></li>
</ul>
</li>
<li><a href="#4-maximum-entropy-of-random-variable-over-range-r-with-set-of-constraints-leftlangle-f_nxrightrangle-alpha_n-with-n1dots-n-and-f_n-is-of-polynomial-order" id="markdown-toc-4-maximum-entropy-of-random-variable-over-range-r-with-set-of-constraints-leftlangle-f_nxrightrangle-alpha_n-with-n1dots-n-and-f_n-is-of-polynomial-order">4. Maximum entropy of random variable over range \(R\) with set of constraints \(\left\langle f_{n}(x)\right\rangle =\alpha_{n}\) with \(n=1\dots N\) and \(f_{n}\) is of polynomial order</a></li>
</ul>
<h2 id="introduction">Introduction</h2>
<p>In this post, I derive the uniform, gaussian, exponential, and another funky probability distribution from the first principles of information theory. I originally did it for a class, but I enjoyed it and learned a lot so I am adding it here so I don’t forget about it.</p>
<p>I actually think it’s pretty magical that these common distributions just pop out when you are using the information framework. It feels so much more satisfying/intuitive than it did before.</p>
<h3 id="maximum-entropy-principle">Maximum Entropy Principle</h3>
<p>Recall that <a href="https://en.wikipedia.org/wiki/Entropy_(information_theory)">information entropy</a> is a mathematical framework for quantifying “uncertainty.” The formula for the information entropy of a random variable is
\(H(x) = - \int p(x)\ln p(x)dx\)
.
In statistics/information theory, the <a href="https://en.wikipedia.org/wiki/Maximum_entropy_probability_distribution">maximum entropy probability distribution</a> is (you guessed it!) the distribution that, subject to given constraints, has maximum entropy. Given a choice of distributions, the “Principle of Maximum Entropy” tells us that the one with maximum entropy is the best. Here’s a snippet of the idea from the <a href="https://en.wikipedia.org/wiki/Principle_of_maximum_entropy">wikipedia page</a>:</p>
<blockquote>
<p>The principle of maximum entropy states that, subject to precisely stated prior data (such as a proposition that expresses testable information), the probability distribution which best represents the current state of knowledge is the one with largest entropy.
Another way of stating this: Take precisely stated prior data or testable information about a probability distribution function. Consider the set of all trial probability distributions that would encode the prior data. According to this principle, the distribution with maximal information entropy is the proper one.
…
In ordinary language, the principle of maximum entropy can be said to express a claim of epistemic modesty, or of maximum ignorance. The selected distribution is the one that makes the least claim to being informed beyond the stated prior data, that is to say the one that admits the most ignorance beyond the stated prior data.</p>
</blockquote>
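<p>As a quick numerical sanity check of the entropy formula (this snippet is my addition, not part of the original derivation), we can approximate \(H(x) = -\int p(x)\ln p(x)dx\) with a Riemann sum for a standard normal and compare against its known closed-form entropy \(\frac{1}{2}\ln(2\pi e)\):</p>

```python
import math

# Riemann-sum approximation of H(x) = -∫ p(x) ln p(x) dx for a standard
# normal, compared against the known closed form (1/2) ln(2πe) ≈ 1.4189.
dx = 0.001
H = 0.0
for i in range(int(16 / dx) + 1):
    x = -8 + i * dx
    p = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    H -= p * math.log(p) * dx

print(H)  # ≈ 1.4189, matching 0.5 * math.log(2 * math.pi * math.e)
```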
<h3 id="lagrange-multipliers">Lagrange Multipliers</h3>
<p>Given the above, we can use the maximum entropy principle to derive the best probability distribution for a given use. A useful tool in doing so is the Lagrange Multiplier (<a href="https://www.khanacademy.org/math/multivariable-calculus/applications-of-multivariable-derivatives/constrained-optimization/a/lagrange-multipliers-single-constraint">Khan Acad article</a>, <a href="https://en.wikipedia.org/wiki/Lagrange_multiplier">wikipedia</a>), which helps us maximize or minimize a function under a given set of constraints.</p>
<p>For a single variable function \(f(x)\) subject to the constraint \(g(x) = c\), the lagrangian is of the form:
\(\mathcal{L}(x,\lambda) = f(x) - \lambda(g(x)- c)\)
, which is then differentiated and set to zero to find a solution.</p>
<p>The above can then be extended to additional variables and constraints as:</p>
\[\mathcal{L}(x_{1}\dots x_{n},\lambda_{1}\dots\lambda_{M}) = f(x_{1}\dots x_{n}) - \Sigma_{k=1}^{M}\lambda_{k}g_{k}(x_{1}\dots x_{n})\]
<p>and solving</p>
\[\nabla_{x_{1},\dots,x_{n},\lambda_{1},\dots,\lambda_{M}}\mathcal{L}(x_{1}\dots x_{n},\lambda_{1}\dots\lambda_{M})=0\]
<p>or, equivalently, solving</p>
\[\begin{cases}
\nabla f(x)-\Sigma_{k=1}^{M}\lambda_{k}\nabla g_{k}(x)=0\\
g_{1}(x)=\dots=g_{M}(x)=0
\end{cases}\]
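<p>As a toy sanity check of the recipe (my addition, with an arbitrary made-up objective and constraint), here is the method on a problem small enough to verify by brute force:</p>

```python
# Toy example: maximize f(x, y) = x*y subject to g(x, y) = x + y - 4 = 0.
# The Lagrange conditions ∇f = λ∇g give y = λ and x = λ, so the
# constrained optimum is x = y = 2 with λ = 2 and f = 4.
x_star = y_star = lam = 2.0
assert abs(x_star - lam) < 1e-12 and abs(y_star - lam) < 1e-12

# Brute-force check along the constraint line x + y = 4:
best = max(x * (4 - x) for x in [i / 1000 for i in range(4001)])
print(best)  # 4.0, attained at x = y = 2
```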
<p>In this case, since we are deriving probability distributions, the pdf must integrate to one, and as such, every derivation will include the constraint \((\int p(x)dx-1)=0\).</p>
<p>With all that, we can begin:</p>
<h2 id="1-derivation-of-maximum-entropy-probability-distribution-with-no-other-constraints-uniform-distribution">1. Derivation of maximum entropy probability distribution with no other constraints (uniform distribution)</h2>
<p>First, we solve for the case where the only constraint is that the distribution is a pdf, which we will see is the uniform distribution. To maximize entropy, we want to minimize the following function:</p>
\[J(p)=\int_{a}^{b} p(x)\ln p(x)dx-\lambda_{0}\left(\int_{a}^{b} p(x)dx-1\right)\]
<p>. Taking the derivative with respect to \(p(x)\) and setting to zero,</p>
\[\frac{\delta J}{\delta p(x)}=1+\ln p(x)-\lambda_{0}=0\]
\[\ln p(x)=\lambda_{0}-1\]
\[p(x)=e^{\lambda_{0}-1}\]
<p>, which in turn must satisfy</p>
\[\int_{a}^{b} p(x)dx=1=\int_{a}^{b} e^{\lambda_{0}-1}dx\]
<p>Note: To check if this is a minimum (which would maximize entropy given the
way the equation was set up), we also need to see if the second
derivative with respect to \(p(x)\) is positive here or not, which it
clearly always is:</p>
\[\frac{\delta^{2} J}{\delta p(x)^{2}}=\frac{1}{p(x)}\]
<h3 id="satisfy--constraint">Satisfy constraint</h3>
\[\int_{a}^{b} p(x)dx=\int_{a}^{b} e^{\lambda_{0}-1}dx=1\]
\[e^{\lambda_{0}-1} \int_{a}^{b} dx=1\]
\[e^{\lambda_{0}-1} (b-a) = 1\]
\[e^{\lambda_{0}-1} = \frac{1}{b-a}\]
\[\lambda_{0}-1 = \ln\frac{1}{b-a}\]
\[\lambda_{0} = 1 + \ln \frac{1}{b-a}\]
<h3 id="putting-together">Putting Together</h3>
<p>Plugging the constraint \(\lambda_{0} = 1 + \ln \frac{1}{b-a}\) into the pdf \(p(x)=e^{\lambda_{0}-1}\), we have:</p>
\[p(x)=e^{\lambda_{0}-1}\]
\[p(x)=e^{(1 + \ln \frac{1}{b-a})-1}\]
\[p(x)=e^{\ln \frac{1}{b-a}}\]
\[p(x)=\frac{1}{b-a}\]
<p>. Of course, this is only defined in the range between \(a\) and \(b\), however, so the final function is:</p>
\[p(x)=\begin{cases}
\frac{1}{b-a} & a\leq x \leq b\\
0 & \text{otherwise}
\end{cases}\]
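<p>To convince yourself numerically (a little sketch I’m adding here, with an arbitrary tilted density on the same support as the competitor), the uniform should beat any other density on \([a,b]\):</p>

```python
import math

# The uniform on [a, b] should have higher entropy than any other density
# on the same support.  Compare against the tilted (but still normalized)
# density p(x) = (1 + eps*(x - 1)) / 2 on [0, 2].
a, b, eps, dx = 0.0, 2.0, 0.5, 0.0005

def entropy(p):
    # Midpoint-rule approximation of -∫ p ln p over [a, b]
    n = int((b - a) / dx)
    H = 0.0
    for i in range(n):
        px = p(a + (i + 0.5) * dx)
        H -= px * math.log(px) * dx
    return H

H_unif = entropy(lambda x: 1.0 / (b - a))
H_tilt = entropy(lambda x: (1 + eps * (x - 1)) / 2)
print(H_unif, H_tilt)  # ln(2) ≈ 0.6931 beats the tilted density
```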
<h2 id="2-derivation-of-maximum-entropy-probability-distribution-for-given-fixed-mean-mu-and-variance-sigma2-gaussian-distribution">2. Derivation of maximum entropy probability distribution for given fixed mean \(\mu\) and variance \(\sigma^{2}\) (gaussian distribution)</h2>
<p>Now, for the case when we have a specified mean and variance, which we will see is the gaussian distribution. To maximize entropy, we want to minimize the following function:</p>
\[J(p)=\int p(x)\ln p(x)dx-\lambda_{0}\left(\int p(x)dx-1\right)-\lambda_{1}\left(\int p(x)(x-\mu)^{2}dx-\sigma^{2}\right)\]
<p>, where the first constraint is the definition of a pdf and the second is the definition of the variance (which also gives us the mean for free). Taking the derivative with respect to \(p(x)\) and setting to zero,</p>
\[\frac{\delta J}{\delta p(x)}=1+\ln p(x)-\lambda_{0}-\lambda_{1}(x-\mu)^{2}=0\]
\[\ln p(x)=\lambda_{0}-1+\lambda_{1}(x-\mu)^{2}\]
<p>Since the multipliers are free parameters that will be fixed by the constraints anyway, we can relabel them (taking \(2-\lambda_{0}\) as the new \(\lambda_{0}\) and \(-\lambda_{1}\) as the new \(\lambda_{1}\)) to write</p>
\[p(x)=e^{-\lambda_{0}+1-\lambda_{1}(x-\mu)^{2}}\]
<p>, which in turn must satisfy</p>
\[\int p(x)dx=1=\int e^{-\lambda_{0}+1-\lambda_{1}(x-\mu)^{2}}dx\]
<p>and</p>
\[\int p(x)(x-\mu)^{2}dx=\sigma^{2}=\int e^{-\lambda_{0}+1-\lambda_{1}(x-\mu)^{2}}(x-\mu)^{2}dx\]
<p>Again, \(\frac{\delta^{2} J}{\delta p(x)^{2}}=\frac{1}{p(x)}\) is always positive, so our solution will be a minimum.</p>
<h3 id="satisfy-first-constraint">Satisfy first constraint</h3>
\[\int p(x)dx=1=\int e^{-\lambda_{0}+1-\lambda_{1}(x-\mu)^{2}}dx\]
<p>Substituting \(z=x-\mu\),</p>
\[1=\int e^{-\lambda_{0}+1-\lambda_{1}z^{2}}dz\]
\[1=e^{-\lambda_{0}+1}\int e^{-\lambda_{1}z^{2}}dz\]
\[e^{\lambda_{0}-1}=\int e^{-\lambda_{1}z^{2}}dz\]
<p>and, by the standard gaussian integral,</p>
\[e^{\lambda_{0}-1}=\sqrt{\frac{\pi}{\lambda_{1}}}\]
<h3 id="satisfy-second-constraint">Satisfy second constraint</h3>
\[\int p(x)(x-\mu)^{2}dx=\sigma^{2}=\int e^{-\lambda_{0}+1-\lambda_{1}(x-\mu)^{2}}(x-\mu)^{2}dx\]
<p>Substituting \(z=x-\mu\) again,</p>
\[\sigma^{2}=\int e^{-\lambda_{0}+1-\lambda_{1}z^{2}}z^{2}dz\]
\[\sigma^{2}e^{\lambda_{0}-1}=\int e^{-\lambda_{1}z^{2}}z^{2}dz\]
\[\sigma^{2}e^{\lambda_{0}-1}=\frac{1}{2}\sqrt{\frac{\pi}{\lambda_{1}^{3}}}\]
\[\sigma^{2}e^{\lambda_{0}-1}=\frac{1}{2\lambda_{1}}\sqrt{\frac{\pi}{\lambda_{1}}}\]
\[2\lambda_{1}\sigma^{2}e^{\lambda_{0}-1}=\sqrt{\frac{\pi}{\lambda_{1}}}\]
<h3 id="putting-together-1">Putting together</h3>
\[\sqrt{\frac{\pi}{\lambda_{1}}}=e^{\lambda_{0}-1}=2\lambda_{1}\sigma^{2}e^{\lambda_{0}-1}\]
<p>so</p>
\[e^{\lambda_{0}-1}=2\lambda_{1}\sigma^{2}e^{\lambda_{0}-1}\]
\[1=2\lambda_{1}\sigma^{2}\]
\[\lambda_{1}=\frac{1}{2\sigma^{2}}\]
<p>. Plugging in for the other lambda,</p>
\[\sqrt{\frac{\pi}{\lambda_{1}}}=e^{\lambda_{0}-1}\]
\[\sqrt{2\sigma^{2}\pi}=e^{\lambda_{0}-1}\]
\[\ln\sqrt{2\sigma^{2}\pi}=\lambda_{0}-1\]
\[\lambda_{0}=\ln\sqrt{2\sigma^{2}\pi}+1\]
<p>Now, we plug back into the first equation</p>
\[p(x)=e^{-\lambda_{0}+1-\lambda_{1}(x-\mu)^{2}}\]
\[=e^{-\ln\sqrt{2\sigma^{2}\pi}-\frac{1}{2\sigma^{2}}(x-\mu)^{2}}\]
\[=e^{-\ln\sqrt{2\sigma^{2}\pi}}e^{-\frac{1}{2\sigma^{2}}(x-\mu)^{2}}\]
\[=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}\]
<p>which we can note is, by definition, the pdf of the Gaussian!</p>
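<p>As a quick check of the max-ent claim itself (my addition; the Laplace entropy formula is a standard closed form, not derived in this post), we can compare the gaussian against another density with the same mean and variance and see that the gaussian wins:</p>

```python
import math

# Fix mean 0 and variance 1.  The gaussian's differential entropy is
# (1/2) ln(2πe); a Laplace density with the same variance has scale
# b = 1/sqrt(2) and entropy 1 + ln(2b) (standard closed forms).
H_gauss = 0.5 * math.log(2 * math.pi * math.e)   # ≈ 1.4189
H_laplace = 1 + math.log(2 / math.sqrt(2))       # ≈ 1.3466
print(H_gauss, H_laplace)
```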
<h2 id="3-derivation-of-maximum-entropy-probability-distribution-of-half-bounded-random-variable-with-fixed-mean-barr-exponential-distribution">3. Derivation of maximum entropy probability distribution of half-bounded random variable with fixed mean \(\bar{r}\) (exponential distribution)</h2>
<p>Now we constrain on a fixed mean, but no fixed variance, which we will see yields the exponential distribution. To maximize entropy, we want to minimize the following function:</p>
\[J(p)=\int_{0}^{\infty} p(x)\ln p(x)dx-\lambda_{0}\left(\int_{0}^{\infty}p(x)dx-1\right)-\lambda_{1}\left(\int_{0}^{\infty}x*p(x)dx-\bar{r}\right)\]
<p>Now take derivative</p>
\[\frac{\delta J}{\delta p(x)}=1+\ln p(x)-\lambda_{0}-\lambda_{1}x\]
<p>To check if this is a minimum of the function, we need to see if the
second derivative is positive with respect to p(x), which it is:</p>
<p>\(\frac{\delta^{2} J}{\delta p(x)^{2}}=\frac{1}{p(x)}\) Setting the first
derivative to zero, we have</p>
\[0=1+\ln p(x)-\lambda_{0}-\lambda_{1}x\]
<p>Solving for \(p(x)\) (flipping the signs of the free multipliers for convenience),</p>
\[p(x)=e^{-\lambda_{0}+1-\lambda_{1} x}\]
<p>, which must satisfy the constraints \(\int_{0}^{\infty}p(x)dx=1\) and
\(\int_{0}^{\infty}x*p(x)dx=\bar{r}\).</p>
<h3 id="satisfying-first-constraint">Satisfying first constraint</h3>
\[\int_{0}^{\infty}p(x)dx=1\]
\[\int_{0}^{\infty}e^{-\lambda_{0}+1-\lambda_{1}x}dx=1\]
\[\int_{0}^{\infty}e^{-\lambda_{1}x}dx=e^{\lambda_{0}-1}\]
\[\frac{1}{\lambda_{1}}=e^{\lambda_{0}-1}\]
\[\lambda_{1}=e^{-\lambda_{0}+1}\]
<h3 id="satisfying-the-second-constraint">Satisfying the second constraint</h3>
\[\int_{0}^{\infty}x*e^{-\lambda_{0}+1-\lambda_{1}x}dx=\bar{r}\]
\[\int_{0}^{\infty}x*e^{-\lambda_{0}+1}e^{-\lambda_{1}x}dx=\bar{r}\]
<p>substituting in \(\lambda_{1}=e^{-\lambda_{0}+1}\) from above</p>
\[\int_{0}^{\infty}x*\lambda_{1}e^{-\lambda_{1}x}dx=\bar{r}\]
<h3 id="putting-together-2">Putting together</h3>
<p>Rather than evaluating this last integral above, we can simply stop and
note that in evaluating our constraints we have stumbled upon the
formula for an exponential random variable with parameter \(\lambda\)!</p>
<p>More explicitly:</p>
\[\int_{0}^{\infty}x*\lambda_{1}e^{-\lambda_{1}x}dx=\bar{r}\]
\[\int_{0}^{\infty}x*p(x)dx=\bar{r}\]
<p>where \(p(x)=\lambda e^{-\lambda x}\), the pdf of the exponential distribution
for \(x\ge0\), with \(\lambda=\lambda_{1}=\frac{1}{\bar{r}}\).</p>
<p>In other words,</p>
\[p(x)=\begin{cases}
\frac{1}{\bar{r}}e^{-\frac{x}{\bar{r}}} & x\ge0\\
0 & x<0
\end{cases}\]
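<p>Again, a quick comparison (my addition, using closed-form entropies): among densities on \([0,\infty)\) with mean \(\bar{r}\), the exponential should beat, say, a uniform on \([0,2\bar{r}]\), which has the same mean:</p>

```python
import math

# Fix support [0, ∞) and mean r_bar.  Exponential entropy is 1 + ln(r_bar);
# a uniform on [0, 2*r_bar] has the same mean but entropy ln(2*r_bar).
# The gap is 1 - ln(2) > 0, so the exponential wins for any r_bar > 0.
r_bar = 1.5
H_exp = 1 + math.log(r_bar)
H_unif = math.log(2 * r_bar)
print(H_exp, H_unif)
```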
<h2 id="4-maximum-entropy-of-random-variable-over-range-r-with-set-of-constraints-leftlangle-f_nxrightrangle-alpha_n-with-n1dots-n-and-f_n-is-of-polynomial-order">4. Maximum entropy of random variable over range \(R\) with set of constraints \(\left\langle f_{n}(x)\right\rangle =\alpha_{n}\) with \(n=1\dots N\) and \(f_{n}\) is of polynomial order</h2>
<p>We will see that the dominant term of each \(f_{n}\) must be of even order for the constraints to be enforceable.</p>
<p>Following the same approach as above:</p>
\[J(p)=-\int p(x)\ln p(x)dx+\lambda_{0}\left(\int p(x)dx-1\right)+\Sigma_{i=1}^{N}\lambda_{i}\left(\int p(x)f_{i}(x)dx-\alpha_{i}\right)\]
\[\frac{\delta J}{\delta p(x)}=-1-\ln p(x)+\lambda_{0}+\Sigma_{i=1}^{N}\lambda_{i}f_{i}(x)\]
\[0=-1-\ln p(x)+\lambda_{0}+\Sigma_{i=1}^{N}\lambda_{i}f_{i}(x)\]
\[p(x)=e^{\lambda_{0}-1+\Sigma_{i=1}^{N}\lambda_{i}f_{i}(x)}\]
<p>all where \(f_{i}(x)=\Sigma_{j=1}^{M}b_{j}x^{j}\).</p>
<p>We now consider the conditions in which the random variable can be
defined in the entire domain \((-\infty,\infty)\). Looking at the
normalization constraint,</p>
\[\int p(x)dx=\int e^{\lambda_{0}-1+\Sigma_{i=1}^{N}\lambda_{i}f_{i}(x)}dx=1\]
<p>we note that we need our exponential function to integrate to 1. In
order for this equation to be defined in the entire real domain, we thus
will need the exponential function to integrate to a finite value, so
that we can provide a normalization constant that will result in
integration to 1.</p>
<p>Looking at the function
\(e^{\lambda_{0}-1+\Sigma_{i=1}^{N}\lambda_{i}f_{i}(x)}\) (which must
remain finite for all x), we can thus conclude that
\(\lambda_{0}-1+\Sigma_{i=1}^{N}\lambda_{i}f_{i}(x)\) must not converge to
positive infinity, but may converge to negative infinity (because it
would cause the exponential to converge to zero) or to any finite value
as \(x\) approaches positive or negative infinity. The only components of
this function that depend on \(x\) are the polynomial constraints of form
\(f_{i}(x)=\Sigma_{j=1}^{M}b_{j}x^{j}\). As such, these constraints are
the only components at risk of forcing the function towards infinity,
provided that \(\lambda_{0}\neq\infty.\) Therefore, because the
\(\lambda_{i}\) corresponding to any \(f_{i}\) can be positive or
negative, the function will remain normalizable so long as
\(f_{i}(x)=\Sigma_{j=1}^{M}b_{j}x^{j}\) is bounded above for all \(x\), or
bounded below for all \(x.\)</p>
<p>Finally, we can consider the conditions for which these criteria for
\(f_{i}\) will be satisfied. In short, the only way to guarantee that
\(f_{i}\) remains either bounded above or bounded below will be if the dominant
component of the polynomial \(f_{i}\) is of an EVEN order for all \(i\) s.t.
\(\lambda_{i}\neq0\). If the dominant component is odd, then \(f_{i}\) will
either move from negative infinity to positive infinity (or, if negated,
from positive infinity to negative infinity) as x moves across the
domain, which means that no finite and nonzero \(\lambda_{i}\) could be
chosen to maintain the criteria outlined above.</p>
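<p>A numerical illustration of the even-order requirement (my addition, using \(f(x)=-x^{4}\) and \(f(x)=x^{3}\) as hypothetical constraint terms):</p>

```python
import math

# With an even-order leading term, e.g. f(x) = -x^4, the exponential is
# normalizable: the normalizer ∫ e^{-x^4} dx is finite (= 2Γ(5/4) ≈ 1.813).
dx = 0.001
Z = sum(math.exp(-(i * dx) ** 4) * dx for i in range(-8000, 8001))
print(Z)  # ≈ 1.813

# With an odd leading term, f(x) = x^3, the exponent λ·x^3 is unbounded
# above for either sign of λ, so e^{λ·x^3} cannot be normalized.
grid = [i * 0.1 for i in range(-1000, 1001)]
for lam in (1.0, -1.0):
    assert max(lam * x ** 3 for x in grid) >= 1e6 - 1e-3
```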
Thu, 16 Mar 2017 06:00:00 -0400
http://sgfin.github.io/2017/03/16/Deriving-probability-distributions-using-the-Principle-of-Maximum-Entropy/
http://sgfin.github.io/2017/03/16/Deriving-probability-distributions-using-the-Principle-of-Maximum-Entropy/Deriving the information entropy of the multivariate gaussian<ul id="markdown-toc">
<li><a href="#introduction-and-trace-tricks" id="markdown-toc-introduction-and-trace-tricks">Introduction and Trace Tricks</a></li>
<li><a href="#derivation" id="markdown-toc-derivation">Derivation</a> <ul>
<li><a href="#setup" id="markdown-toc-setup">Setup</a></li>
<li><a href="#first-term" id="markdown-toc-first-term">First term</a></li>
<li><a href="#second-term--trace-trick-coming" id="markdown-toc-second-term--trace-trick-coming">Second term (Trace Trick Coming!)</a></li>
<li><a href="#recombining-the-terms" id="markdown-toc-recombining-the-terms">Recombining the terms</a></li>
</ul>
</li>
</ul>
<h2 id="introduction-and-trace-tricks">Introduction and Trace Tricks</h2>
<p>The pdf of a <a href="https://en.wikipedia.org/wiki/Multivariate_normal_distribution">multivariate gaussian</a> is as follows:</p>
\[p(x) = \frac{1}{(\sqrt{2\pi})^{N}\sqrt{\det\Sigma}}e^{-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)}\]
<p>, where</p>
\[\Sigma_{i,j} = E[(x_{i} - \mu_{i})(x_{j} - \mu_{j})]\]
<p>is the <a href="https://en.wikipedia.org/wiki/Covariance_matrix">covariance matrix</a>, which can be expressed in vector notation as</p>
\[\Sigma = E[(X-E[X])(X-E[X])^{T}] = \int p(x)(x-\mu)(x-\mu)^{T}dx\]
<p>. I might make the derivation of this formula its own post at some point, but it is in Strang’s intro to linear algebra text so I will hold off. Instead, this post derives the <em>entropy</em> of the multivariate gaussian, which is equal to:</p>
\[H=\frac{N}{2}\ln\left(2\pi e\right)+\frac{1}{2}\ln\det\Sigma\]
<p>Part of the reason why I do this is because the second part of the derivation involves a “trace trick” that I want to remember how to use for the future. The key to the “trace trick” is to recognize that a product of matrices is 1 x 1, and that the value of any such matrix is, by definition, equal to its trace. This then allows you to invoke the quasi-commutative property of the trace:</p>
\[\text{tr}(UVW)=\text{tr}(WUV)\]
<p>to push around the matrices however you desire until they become something tidy/useful. The whole thing feels rather devious to me, personally.</p>
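<p>Both facts are easy to check numerically (a sketch I’m adding here, with arbitrary small matrices):</p>

```python
# Check tr(UVW) = tr(WUV) and that a 1x1 matrix equals its trace,
# using plain nested-list matrices.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

U = [[1.0, 2.0], [3.0, 4.0]]
V = [[0.0, 1.0], [1.0, 0.0]]
W = [[2.0, 0.0], [0.0, 5.0]]

t1 = trace(matmul(matmul(U, V), W))  # tr(UVW)
t2 = trace(matmul(matmul(W, U), V))  # tr(WUV)
print(t1, t2)  # equal

# x^T U x is 1x1, and its single entry is its own trace:
x = [[1.0], [2.0]]
q = matmul(matmul([[1.0, 2.0]], U), x)
print(q[0][0], trace(q))  # equal
```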
<h2 id="derivation">Derivation</h2>
<h3 id="setup">Setup</h3>
<p>Beginning with the definition of entropy</p>
\[H(x)=-\int p(x)*\ln p(x)dx\]
<p>substituting in the probability function for the multivariate gaussian
in only its second occurrence in the formula,</p>
\[H(x)=-\int p(x)*\ln\left(\frac{1}{(\sqrt{2\pi})^{N}\sqrt{\det\Sigma}}e^{-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)}\right)dx\]
\[=-\int p(x)*\ln\left(\frac{1}{(\sqrt{2\pi})^{N}\sqrt{\det\Sigma}}\right)dx-\int p(x)\ln\left(e^{-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)}\right)dx\]
<p>We will now consider these two terms separately.</p>
<h3 id="first-term">First term</h3>
<p>First, we concern ourselves with the first ln term:</p>
\[-\int p(x)*\ln\left(\frac{1}{(\sqrt{2\pi})^{N}\sqrt{\det\Sigma}}\right)dx\]
\[=\int p(x)*\ln\left((\sqrt{2\pi})^{N}\sqrt{\det\Sigma}\right)dx\]
<p>since all the terms other than \(p(x)\) form a constant,</p>
\[=\left(\ln\left((\sqrt{2\pi})^{N}\sqrt{\det\Sigma}\right)\right)\int p(x)dx\]
<p>and because \(p(x)\) is a PDF, it integrates to 1. Thus, this component of
the equation is</p>
\[\ln\left((\sqrt{2\pi})^{N}\sqrt{\det\Sigma}\right)\]
\[=\frac{N}{2}\ln\left(2\pi\right)+\frac{1}{2}\ln\det\Sigma\]
<h3 id="second-term--trace-trick-coming">Second term (Trace Trick Coming!)</h3>
<p>Now we consider the second ln term</p>
\[-\int p(x)\ln\left(e^{-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)}\right)dx\]
\[=\int p(x)\frac{1}{2}\ln\left(e^{(x-\mu)^{T}\Sigma^{-1}(x-\mu)}\right)dx\]
<p>because \((x-\mu)^{T}\) is a 1 x N matrix, \(\Sigma^{-1}\) is a N x N
matrix, and \((x-\mu)\) is a N x 1 matrix, the matrix product
\((x-\mu)^{T}\Sigma^{-1}(x-\mu)\) is a 1 x 1 matrix. Further, because the
trace of any 1 x 1 matrix
\(\text{tr}(A)=\Sigma_{i=1}^{n}A_{i,i}=A_{1,1}=A\), we can conclude that
the 1 x 1 matrix
\((x-\mu)^{T}\Sigma^{-1}(x-\mu)=\text{tr}((x-\mu)^{T}\Sigma^{-1}(x-\mu))\).</p>
<p>As such, our term becomes</p>
\[=\int p(x)\frac{1}{2}\ln\left(e^{\text{tr}\left[(x-\mu)^{T}\Sigma^{-1}(x-\mu)\right]}\right)dx\]
\[=\frac{1}{2}\int p(x)\ln\left(e^{\text{tr}\left[(x-\mu)^{T}\Sigma^{-1}(x-\mu)\right]}\right)dx\]
<p>, which, by the quasi-commutativity property of the trace function,
\(\text{tr}(UVW)=\text{tr}(WUV)\),</p>
\[=\frac{1}{2}\int p(x)\ln\left(e^{\text{tr}\left[\Sigma^{-1}(x-\mu)(x-\mu)^{T}\right]}\right)dx\]
<p>. Because \(p(x)\) is a scalar and the natural logarithm and exponentials
may cancel, the properties of the trace function allow us to push the
\(p(x)\) and the integral inside of the trace, so</p>
\[=\frac{1}{2}\int\ln\left(e^{\text{tr}\left[\Sigma^{-1}p(x)(x-\mu)(x-\mu)^{T}\right]}\right)dx\]
\[=\frac{1}{2}\ln\left(e^{\text{tr}\left[\Sigma^{-1}\int p(x)(x-\mu)(x-\mu)^{T}dx\right]}\right)\]
<p>But, \(\int p(x)(x-\mu)(x-\mu)^{T}dx=\Sigma\) is just the definition of the covariance matrix! As such,</p>
\[=\frac{1}{2}\ln\left(e^{\text{tr}\left[\Sigma^{-1}\Sigma\right]}\right)\]
\[=\frac{1}{2}\ln\left(e^{\text{tr}\left[I_{N}\right]}\right)\]
\[=\frac{1}{2}\ln\left(e^{N}\right)\]
\[=\frac{N}{2}\ln\left(e\right)\]
<h3 id="recombining-the-terms">Recombining the terms</h3>
<p>Bringing the above terms back together, we have</p>
\[H(x)=\frac{N}{2}\ln\left(2\pi\right)+\frac{1}{2}\ln\det\Sigma+\frac{N}{2}\ln\left(e\right)\]
\[=\frac{N}{2}\ln\left(2\pi e\right)+\frac{1}{2}\ln\det\Sigma\]
<p>as desired.</p>
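<p>The result can also be checked by brute-force numerical integration (my addition, with an example covariance matrix I picked so that \(\Sigma^{-1}\) and \(\det\Sigma\) are easy to write by hand):</p>

```python
import math

# Brute-force check of H = (N/2) ln(2πe) + (1/2) ln det Σ for N = 2 with
# Σ = [[1, 0.5], [0.5, 1]], so det Σ = 0.75 and
# Σ^{-1} = (1/0.75) * [[1, -0.5], [-0.5, 1]].
det = 0.75
a, b_off = 1 / det, -0.5 / det      # entries of Σ^{-1}
norm = 1.0 / (2 * math.pi * math.sqrt(det))

d = 0.02
H = 0.0
n = int(12 / d)
for i in range(n + 1):
    x = -6 + i * d
    for j in range(n + 1):
        y = -6 + j * d
        p = norm * math.exp(-0.5 * (a * x * x + 2 * b_off * x * y + a * y * y))
        H -= p * math.log(p) * d * d

closed = math.log(2 * math.pi * math.e) + 0.5 * math.log(det)
print(H, closed)  # both ≈ 2.694
```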
Sat, 11 Mar 2017 05:00:00 -0500
http://sgfin.github.io/2017/03/11/Deriving-the-information-entropy-of-the-multivariate-gaussian/
http://sgfin.github.io/2017/03/11/Deriving-the-information-entropy-of-the-multivariate-gaussian/