Jeffrey Townsend, PhD
About
Research
Overview
1. TOOLS FOR CANCER GENETICS AND EPIDEMIOLOGY
Whole-exome sequencing has created tremendous potential for revealing the genetic basis and underlying molecular mechanisms of many forms of cancer. However, somatic mutations occur at appreciable frequency within tumors of most cancer types, and identifying the mutations that lie on the causative trajectory from normal tissue to cancerous tissue is challenging. We are making algorithmic advances in clustering across discrete linear sequences that enable two powerful approaches to this identification. First, we are applying maximum-likelihood approaches that we have developed for model-averaged clustering in discrete linear sequences to somatic amino acid replacement mutations appearing within mutated genes. Because the functionally important amino acids of proteins are locally clustered in domains, mutations in multiple tumors that are functionally important to the development of cancer cluster in the linear sequence of the relevant genes, allowing inference of relevance and function even in cases without a three-dimensional protein structure. These clustering analyses have the power to demonstrate, for instance, cross-cancer consistency in the functional importance of the DNA-binding domain of the tumor suppressor p53, whether in a cancer with extensive exome data (ovarian serous adenocarcinoma) or in a cancer with much less extensive exome data (e.g. rectal adenocarcinoma).
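As an illustration of the underlying idea (a minimal sketch, not our model-averaged likelihood machinery; the window size and mutation positions below are hypothetical), a sliding-window scan can flag stretches of a protein where replacement mutations pool across tumors more densely than a uniform null would predict:

```python
from scipy.stats import binomtest

def clustered_windows(mut_positions, seq_len, window=25, alpha=0.01):
    """Flag windows where pooled replacement mutations exceed a uniform null.

    mut_positions: 1-based residue positions of replacement mutations pooled
                   across tumors (hypothetical input).
    seq_len:       protein length in residues.
    """
    n = len(mut_positions)
    p_null = window / seq_len  # chance a uniformly placed mutation hits the window
    hits = []
    for start in range(1, seq_len - window + 2):
        k = sum(start <= pos < start + window for pos in mut_positions)
        # One-sided binomial test of the window's count against the uniform null
        if binomtest(k, n, p_null, alternative="greater").pvalue < alpha:
            hits.append((start, start + window - 1, k))
    return hits

# Hypothetical positions concentrated in a DNA-binding domain of a 393-residue protein
positions = [102, 110, 113, 117, 121, 240, 248, 249, 251, 258, 260, 273]
print(clustered_windows(positions, seq_len=393))
```

A real analysis must correct for the overlap of adjacent windows and for multiple testing; a model-averaged likelihood approach averages over clustering models rather than fixing a single arbitrary window size.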
Second, we are applying evolutionary theory to the problem of identifying the genetic architecture underlying cancer development. The path from normal to cancerous tissue is navigated by an evolutionary process, and tools from evolutionary theory have the potential to parse the mutations that are selected within cells on the path to cancer from those that arise incidentally during the somatic evolution of cancer. The theory we are applying makes use of the differing expectations for synonymous and replacement mutations. Synonymous mutations are expected to have no functional impact; they therefore yield a proxy expectation for the "incidental" mutations, whereas carcinogenic replacement mutations spread within tumors more frequently and are clustered within the gene sequence. Our theory also employs human population polymorphism data, which most evolutionary biologists regard as largely neutral. These data facilitate calibration of the probable impact of replacement changes against sequence conservation by eliminating the confounding variable of the degree of purifying selection, which depresses the number of mutations observed in some genes while allowing others to accumulate many mutations with little impact.
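A stripped-down version of this synonymous-proxy logic (all counts and site numbers below are hypothetical) uses the per-site synonymous rate of a gene to calibrate its background mutation rate, so that an excess of replacement mutations over that expectation suggests selection during tumor evolution:

```python
def replacement_excess(n_syn, n_rep, syn_sites, rep_sites):
    """Ratio of observed replacement mutations to the neutral expectation.

    The per-site synonymous rate proxies the background ("incidental")
    mutation rate; a ratio well above 1 suggests the replacement changes
    were selected during somatic evolution.
    """
    background_rate = n_syn / syn_sites          # per-site neutral proxy
    expected_rep = background_rate * rep_sites   # replacement count if neutral
    return n_rep / expected_rep

# Hypothetical gene: 12 synonymous mutations over 350 synonymous sites,
# 40 replacement mutations over 900 replacement sites
print(replacement_excess(12, 40, 350, 900))  # ~1.30, a modest excess
```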
We are extending this approach to estimate the intensity of selection on mutations along the trajectory toward cancer, revealing the level of selection within tumors for replacement mutations compared with synonymous mutations. This evolutionary analysis is ideal for detecting, from exome sequencing data, the history of selection on sites within genes during the evolution of cancer. These sites, particularly when they represent gain-of-function mutations, identify candidate loci for pharmacological intervention, and the approach will be applied to identify targets for intervention and to design "personal genomics" drugs appropriate for the genetics of individual cancers in individual patients. As a component of that project, we are constructing an "active-experiment" cancer exome database to facilitate further bioinformatic investigation of cancer exome data.
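Schematically (the notation here is illustrative, not our published estimator), the selection intensity at a site can be framed as the ratio of the rate at which replacement mutations at that site are observed among tumors to the baseline rate at which mutation alone would produce them:

```latex
\gamma_i \;\approx\; \frac{\lambda_i^{\mathrm{obs}}}{\mu_i}
```

where \mu_i is calibrated from synonymous variation and a value of \gamma_i far above 1 indicates strong positive selection on site i within tumors.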
2. BIOSTATISTICAL ANALYSIS FOR NONLINEAR MATHEMATICAL MODELS OF THE EPIDEMIOLOGY OF DISEASE
I am developing probabilistic statistical methodologies for the mathematical modeling of disease emergence and spread. The robustness of models has usually been assessed by techniques that explore the relative impact and importance of parameters on the mathematical behavior of the model and on its predictions. For diverse reasons, including the difficulty or cost of acquisition, restrictions due to privacy, and the urgency of analysis in the case of outbreaks, data for the estimation of epidemiological parameters are often sparse. Evaluating a model with the "best point estimate" of sparse data may convey a misleading certitude to policy makers basing decisions on deterministic models of disease outbreak, spread, and persistence. Conversely, policy makers who are aware that models are parameterized with limited data may be dismissive of deterministic predictions that nonetheless have significant validity. These issues are most straightforwardly addressed by probabilistic sensitivity analysis of parameters and full uncertainty analysis of the outcomes of interest. These analyses accommodate the uncertainty of parameters directly by probabilistically resampling data, or likely distributions of parameters, to calculate a probabilistic distribution of outcomes.
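In its simplest form (all distributions below are invented for illustration), the procedure draws parameter sets from their uncertainty distributions, runs the deterministic model once per draw, and reports the distribution of the outcome rather than a single point prediction:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_outcome(beta, gamma):
    """Stand-in for any deterministic epidemic model; here R0 = beta / gamma."""
    return beta / gamma

# Hypothetical uncertainty in transmission (beta) and recovery (gamma) rates
betas = rng.lognormal(mean=np.log(0.4), sigma=0.25, size=10_000)
gammas = rng.gamma(shape=9.0, scale=1 / 30.0, size=10_000)

outcomes = model_outcome(betas, gammas)
print(f"median R0 = {np.median(outcomes):.2f}")
print(f"95% interval = ({np.quantile(outcomes, 0.025):.2f}, "
      f"{np.quantile(outcomes, 0.975):.2f})")
print(f"P(R0 < 1) = {(outcomes < 1).mean():.3f}")
```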
For instance, one of the most common modeling approaches for evaluating interventions is based on differential equation models of disease such as the standard Susceptible-Infected-Recovered (SIR) model. In the SIR model and other more complex constructions, a closed-form solution can often be calculated for the basic reproductive number, R0, the average number of secondary infections that would follow upon a primary infection in a naïve host population. In a population where there is preexisting immunity due to either vaccination or previous infection, the effective reproductive number, Re, is defined as the average number of secondary infections following a primary infection in a population that is not completely naïve.
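For the standard SIR model with transmission rate β and recovery rate γ, these quantities take familiar closed forms (standard results, not specific to our models):

```latex
R_0 = \frac{\beta}{\gamma}, \qquad R_e = R_0 \cdot \frac{S}{N},
```

where S/N is the fraction of the population that remains susceptible.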
Re is of particular interest in public health because interventions that bring its value below 1 are predicted to eradicate the disease. This deterministic threshold of Re = 1 is proposed as the basis for policy decisions regarding the level of intervention that should be implemented. However, the best estimates for the parameters needed for the closed-form solution of Re are inevitably inexact. To address this point, sensitivity analyses are frequently performed to evaluate models and explore the relationship between model parameters and outcomes. In such deterministic sensitivity analyses, one or more parameters are perturbed and the corresponding effects on outcomes are examined. The perturbation can be performed either by evaluating the effect of arbitrarily small changes in parameter values (e.g. ±1%) or by evaluating the effects across a range of values defined by plausible probability density functions. Because the values of the other parameters are held fixed at their best point estimates, these strategies do not account for interaction effects in non-linear dynamic models and do not assess global uncertainty in outcomes, as the example below makes concrete. Uncertainty analysis has been recommended in many fields of mathematical modeling, including medical decision making, as an optimal approach to presenting models. In the case of dynamic transmission modeling, however, authoritative best practices have not included uncertainty analyses. Modeling guidelines recommend probabilistic sensitivity analysis, in which both global parameter uncertainty and output uncertainty are addressed, as the best-practice method for uncertainty analysis; yet that ideal has not been extended to dynamic transmission models, for which its implementation has been challenging.
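A deterministic one-way analysis of this kind is easy to state (illustrative numbers only): perturb one parameter by ±1% while the others stay at their point estimates, and record the change in the outcome, here Re:

```python
def effective_R(beta, gamma, susceptible_frac):
    """Effective reproductive number for a standard SIR model."""
    return (beta / gamma) * susceptible_frac

base = dict(beta=0.40, gamma=0.30, susceptible_frac=0.80)
for name in base:
    for delta in (-0.01, +0.01):
        perturbed = dict(base, **{name: base[name] * (1 + delta)})
        print(f"{name} {delta:+.0%}: Re = {effective_R(**perturbed):.4f}")
# Note what is missing: every other parameter is frozen at its point estimate,
# so parameter interactions and global outcome uncertainty are invisible.
```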
We are developing methods for global probabilistic sensitivity analysis that allow the contribution of each parameter to model outcomes to be investigated while also taking into account the uncertainty of the other model parameters. Uncertainty in parameter values can be accounted for by sampling randomly from empirical data or from probability density functions fit to empirical data. Depending on the application, such sampling techniques include bootstrapping, Monte Carlo sampling, and Latin hypercube sampling. The model output generated from the parameter samples can then be analyzed using linear (e.g. partial correlation coefficients), monotonic (e.g. partial rank correlation coefficients), and non-monotonic statistical tests (e.g. the sensitivity index) to determine the contribution of each parameter to the variation in output values, as sketched in the example following this paragraph. For a global sensitivity analysis to yield probabilities associated with outcomes that are of greatest utility to policy makers, probabilistic analyses of parameter uncertainty must be carried through to the model outcomes. For example, the probability of eradication of an epidemic is sensitive to both the level of vaccination and the level of treatment. Moreover, a policy based on the analysis of data should take into consideration not only the best estimate of necessary action but also the uncertainty around that outcome estimate. Deterministic advice, specifying an exact combination of treatment and vaccination that should halt an influenza epidemic, is very different from, and can be misleading compared with, a probabilistic statement that gives a policymaker the predictive probability that a particular policy of treatment and vaccination will halt the epidemic. Similar approaches, applied with a next-generation matrix to rabies vaccination in Tanzania, demonstrated that the WHO goal of 70% vaccination coverage of dogs in two districts would with high probability control rabies, provided that the process to achieve those practicable goals could be mustered.
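A minimal global analysis along these lines (parameter ranges chosen purely for illustration, and a deliberately simple stand-in model): Latin hypercube samples cover the joint parameter space, the model is run at each sample, and partial rank correlation coefficients score each parameter's contribution while the others vary:

```python
import numpy as np
from scipy.stats import qmc, rankdata

def model(beta, gamma, susceptible_frac):
    return (beta / gamma) * susceptible_frac  # outcome: Re

# Latin hypercube sample over hypothetical parameter ranges
sampler = qmc.LatinHypercube(d=3, seed=1)
unit = sampler.random(n=2000)
lows, highs = np.array([0.2, 0.1, 0.5]), np.array([0.6, 0.5, 1.0])
params = qmc.scale(unit, lows, highs)  # columns: beta, gamma, susceptible_frac

y = model(params[:, 0], params[:, 1], params[:, 2])

def prcc(X, y, j):
    """Partial rank correlation of parameter j with outcome y, controlling
    for the remaining parameters by regressing them out of the ranks."""
    R = np.column_stack([rankdata(col) for col in X.T])
    ry = rankdata(y)
    A = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
    res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

for j, name in enumerate(["beta", "gamma", "susceptible_frac"]):
    print(f"PRCC({name}) = {prcc(params, y, j):+.2f}")
```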
A public health decision maker would find most useful the assignment of a probability of eradication to each level of treatment, so that the cost of intervention can be weighed precisely against the potential for failure. These probabilistic outcome distributions also feed directly into cost-effectiveness estimation, a field that has embraced uncertainty analysis but that, until our recent work, had not incorporated uncertainty from nonlinear infectious disease models into its calculations.
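Concretely (with simulated outcomes invented for illustration), once an uncertainty analysis yields many outcome draws per intervention level, the probability of eradication at each level is simply the fraction of draws in which Re falls below 1:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: 5,000 uncertainty draws of Re at each treatment coverage level,
# with treatment assumed to scale Re down proportionally
for coverage in (0.0, 0.2, 0.4, 0.6, 0.8):
    re_draws = (1 - coverage) * rng.lognormal(np.log(1.5), 0.3, size=5_000)
    print(f"coverage {coverage:.0%}: P(eradication) = {(re_draws < 1).mean():.2f}")
```

A table of this form lets the decision maker read off, for each affordable level of intervention, the residual risk of failure.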
Many projects are ongoing in the lab, covering the topics summarized below, including many on which we have already published and many on which we have not. In particular, numerous projects on the somatic evolution of cancer have not yet reached publication.
Medical Subject Headings (MeSH)
Academic Achievements & Community Involvement
News & Links
Media
- Spotted DNA microarray. Red spots represent genes abundantly expressed in wine yeast growing in a high concentration of copper sulfate. Green spots represent genes expressed abundantly in wine yeast growing at a low concentration of copper sulfate. Copper sulfate is often applied in vineyards to control growth of fungi.
- Information on the Tree of Life. Recent efforts to reveal the evolutionary history of life on earth have increasingly relied on the sequencing of DNA from multiple species for multiple genes. This figure demonstrates a principle that should guide these efforts: to understand deep divergences, sample taxa that diverge deeply first. a) and b) Curves depict the cumulative support for the bold deep internode of four species (the fungi Yarrowia lipolytica, Saccharomyces cerevisiae, Coccidioides immitis, and Neurospora crassa), ranging from zero to complete sampling, for several sampling schemes: the outcome based on perfect and worst-possible performance (dashed); the outcome based on prioritizing sampling with a novel theoretical prediction that uses the rate of evolution of the sequences (solid); the outcome based on prioritizing sampling of all genes for the deepest ingroup (dash-dotted); and the expectation for haphazard sampling (dotted). c) The established chronogram, or time tree, of the evolution of these species. Vertical bars in the plots correspond to switches from sampling characters from deeper-branching taxa to sampling characters from shallower-branching taxa; note that the slope of the increase in cumulative information (red and green curves) declines as sequences are sampled from more recently diverged lineages in the tree, and that this pattern of high utility in sampling the deepest lineages is revealed for both the clade in panel a and the clade in panel b.
- Population genetic modeling of HGT suggests that several key quantities are important to designing any sampling-based assay of horizontal gene transfer (HGT) in large populations. The HGT rate r and the exposed fraction X play significant but ultimately minor roles in the population dynamics, most likely affecting only the number of original opportunities for horizontal spread of genetic material. The Malthusian selection coefficient m of the transferred genetic material and the time t, in recipient generations, since exposure play key, non-linear roles in determining the potential for detection of HGT. Sample size n is important, but practical sample sizes are frequently many orders of magnitude below the extant population size. It is therefore essential to wait until natural selection has had time to operate in order to have any chance of effectively detecting horizontal gene transfer events (see the sketch following this list).
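The non-linear interplay of m, t, and n described in this caption can be made explicit with a standard selection model (standard population genetics, offered as an illustration of the caption's point rather than the exact formulation of the underlying work): a transferred element at initial frequency f_0 grows under Malthusian selection coefficient m, and detection requires at least one carrier in a sample of size n:

```latex
f(t) = \frac{f_0\, e^{m t}}{1 - f_0 + f_0\, e^{m t}}, \qquad
P_{\mathrm{detect}}(t) \approx 1 - \bigl(1 - f(t)\bigr)^{n}.
```

Because f_0 is typically minuscule relative to any feasible n, the detection probability remains negligible until selection has driven f(t) upward, which is the conclusion of the caption.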
News
- August 27, 2024 (Source: OncLive)
Dr. Townsend on the Future Analysis of Prostate Cancer Driver Mutations Using Neoplasm Tissue Samples
- June 20, 2024
Chemotherapy Before Surgery Benefits Some Patients With Pancreatic Cancer
- May 29, 2024 (Source: Yale News)
To avoid infection spread, how long a quarantine is sufficient? It depends
- April 12, 2024
Yale Cancer Center Faculty and Trainees Present at AACR Annual Meeting