International Society for History, Philosophy, and Social Studies of Biology


Program

MONDAY, JULY 6  /  11:00 - 12:30  /  DS-R525
Organized session / standard talks
Modeling in systems biology: Simplicity versus completeness
Organizer(s):

Sara Green (University of Copenhagen, Denmark); Melinda Fagan (University of Utah, United States)

All things considered, it would seem obvious that a mathematical model of a biological system that is more complete in characterizing the parts and operations of a mechanism would be preferable to one that simplifies. After all, the components and processes that are left out of the simplified model may make a difference to the behavior of the mechanism that cannot be detected except through the use of a complete model. While systems biologists do employ models that attempt to be as complete as possible to determine whether the hypothesized mechanism will behave as anticipated, when the focus is on explanation they often prefer simpler models. This symposium will examine specific contexts in which the question of completeness versus simplicity has arisen in systems biology, the reasons modelers advance for their strategy, and the understanding of phenomena their models offer.

Ingo Brigandt will set out the symposium's key issue, what the complete modeling of a phenomenon involves, and contrast this with the different types of mathematical models currently deployed in systems biology and the particular explanatory insight each provides. William Bechtel will then examine a specific case involving circadian rhythms in cyanobacteria in which, in order to understand how such rhythms were produced, the researchers developed a simplified model and searched for parameters that would generate sustained oscillations in the model. Finally, Sara Green will consider cases in which investigators are aiming to generate complete models, but will focus on the critics of such projects and the shortcomings they identify in the pursuit of completeness.


Different types of explanatory mathematical analysis in systems biology

Ingo Brigandt (University of Alberta, Canada)

A mechanistic explanation cites the components of a mechanism, including the entities, activities, and organizational features that underlie the phenomenon to be explained. But in addition to mentioning relevant components, the explanation also has to lay out how the operation of the mechanism generates the phenomenon of interest. This explanatory understanding often comes from mentally simulating the behavior of the components and overall mechanism, facilitated by a mechanism diagram. But complex mechanisms cannot be mentally simulated; systems biology instead studies such mechanisms using different types of mathematical models. The result of a computer simulation may show that the mathematical model does in fact produce the phenomenon of interest, but this does not provide the explanatory understanding afforded by mental simulation (of how the phenomenon is produced). Moreover, several types of mathematical analysis used in systems biology do not explain by reproducing the complete behavior of the mechanism, but instead by analyzing certain aspects of the system. The upshot of such a mathematical analysis is often visualized in graphs, which provide explanatory understanding without being mechanism diagrams. Such explanatory analysis can also require reference to parameter values not found in the actual mechanism in nature. In this talk, I lay out and compare a few of the types of mathematical analysis found in systems biology, including steady state analysis, bifurcation analysis, and stability analysis. I discuss in what way and in what epistemic context each has explanatory import.
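The three analysis types named above can be illustrated on a toy system. The Python sketch below applies steady-state, stability, and a simple bifurcation analysis to a one-variable model of a self-activating gene, dx/dt = beta*x^2/(K + x^2) - gamma*x; the model and all parameter values are illustrative stand-ins, not drawn from the talk.

```python
import numpy as np
from scipy.optimize import brentq

K, gamma = 1.0, 1.0  # activation threshold and degradation rate (illustrative)

def dxdt(x, beta):
    """Toy self-activating gene: synthesis saturates in x, degradation is linear."""
    return beta * x**2 / (K + x**2) - gamma * x

def steady_states(beta):
    """Steady-state analysis: locate all x >= 0 with dx/dt = 0."""
    grid = np.linspace(1e-6, 10.0, 2000)
    roots = [0.0]  # x = 0 is always a fixed point
    for a, b in zip(grid[:-1], grid[1:]):
        if dxdt(a, beta) * dxdt(b, beta) < 0:  # sign change brackets a root
            roots.append(brentq(dxdt, a, b, args=(beta,)))
    return roots

def is_stable(x, beta, h=1e-6):
    """Stability analysis: a fixed point is stable iff d(dx/dt)/dx < 0 there."""
    return (dxdt(x + h, beta) - dxdt(x - h, beta)) / (2 * h) < 0

# Bifurcation analysis: sweep the synthesis rate beta and watch the set of
# steady states change. With K = gamma = 1, a saddle-node bifurcation at
# beta = 2 creates two new fixed points, making the system bistable.
for beta in (1.5, 2.5, 3.5):
    states = [(round(x, 3), "stable" if is_stable(x, beta) else "unstable")
              for x in steady_states(beta)]
    print(f"beta = {beta}:", states)
```

The printed fixed-point sets, rather than any full time-course simulation, are what carry the explanatory information here, which mirrors the abstract's point that such analyses explain without reproducing the mechanism's complete behavior.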


Discovering design with simplified computational models

William Bechtel (University of California, San Diego, United States)

Although some computational models in systems biology aim to characterize completely the mechanism responsible for a phenomenon, another important use of computational modeling is to determine basic design principles that enable a mechanism to exhibit the phenomenon. This is often best achieved by focusing on simplified models that abstract from many known components and operations to determine which are essential. This paper will illustrate this practice by focusing on the circadian clock in cyanobacteria. Unlike circadian clocks in other organisms that rely on feedback loops involving gene expression, the cyanobacterial clock employs just ATP and three proteins, one of which is cyclically phosphorylated and dephosphorylated. In a 2012 paper Jolley, Ode, and Ueda used this as a basis for investigating through computational modeling a basic design that would suffice for a biochemical circadian oscillator. They found that a kinase and a phosphatase operating at just one phosphorylation site will converge to a steady state, but with two sites sustained oscillations are possible for some parameter values. By sampling a large number of parameter sets they identified approximately 1,000,000 (~0.1% of those tested) that sustained oscillations. These fell into two distinct clusters, each of which realizes a design motif (one that forces an ordered sequence of phosphorylation states and another that generates checkpoints by enzyme sequestering). From this foundation, the researchers went on to address questions about how the motifs made the resulting oscillators robust and synchronized the activity of enzymes of the same type. By constructing a model of a minimal network and searching for parameters that sufficed to generate circadian oscillations, this research exemplifies a strategy for discovering design principles that explain the behavior of empirically identified mechanisms by developing highly simplified models.
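The sampling strategy described above can be sketched generically. The Python fragment below is not the authors' code; it substitutes a textbook Goodwin-type negative-feedback oscillator for the two-site phosphorylation model, and only illustrates the workflow of drawing random parameter sets and screening each for sustained oscillations.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def goodwin(t, state, k1, k2, k3, n):
    """Three-step negative-feedback loop; a stand-in for the clock model."""
    x, y, z = state
    return [1.0 / (1.0 + max(z, 0.0)**n) - k1 * x,  # repression by z
            x - k2 * y,
            y - k3 * z]

def sustains_oscillations(params, t_end=500.0):
    """Integrate the model and test whether oscillations persist."""
    sol = solve_ivp(goodwin, (0.0, t_end), [0.1, 0.1, 0.1],
                    args=params, max_step=1.0)
    tail = sol.y[2][sol.y.shape[1] // 2:]  # discard the initial transient
    return tail.max() - tail.min() > 0.05 * (tail.mean() + 1e-9)

# Screen: draw rate constants log-uniformly and keep the sets that oscillate.
# (Per the abstract's figures, the 2012 study tested on the order of 10^9
# sets, of which ~0.1% sustained oscillations.)
hits = []
for _ in range(200):
    k1, k2, k3 = 10 ** rng.uniform(-1.0, 0.5, size=3)
    n = rng.uniform(8.0, 14.0)  # steep feedback is needed for oscillations here
    if sustains_oscillations((k1, k2, k3, n)):
        hits.append((k1, k2, k3, n))

print(f"{len(hits)} of 200 sampled parameter sets sustained oscillations")
# A real screen would next cluster the hits to look for shared design motifs.
```

The design motifs in the study were found precisely by such clustering of the surviving parameter sets, after which the minimal model, not the full biochemistry, does the explanatory work.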


Large-scale modeling and the ideal of completeness

Sara Green (University of Copenhagen, Denmark)

This paper examines the prospects for and challenges to large-scale modeling in systems biology. The dream of complete models of living systems has recently been turned into serious research projects, such as whole-cell projects (the Silicon Cell, E-cell), the Virtual Physiological Human, and the Human Brain Project. Rather than drawing on problematic idealizations for the sake of simplicity, the hope is to create representations of biological systems that are as complete as possible. "Complete" should here be understood in terms of mathematical descriptions that are maximally fitted to experimental parameter values for as many causal processes as possible. If successful, such simulations will not only allow researchers to mimic the behavior of biological systems in silico but also to observe the effects of interventions on the model, and thereby to access in simulations what cannot be obtained through experimental research for practical or ethical reasons. The expected outcomes of such projects for biological and biomedical research, and for the development of personalized medicine, are tantalizing. Many proponents envision that developing such models will revolutionize biomedical research and health practices. But others remain skeptical that the vast amounts of data can be turned into clinically useful information through large-scale integration of data and models. In this paper I analyze the methodological and theoretical challenges that give rise to such controversies. In particular, I focus on the problem of integrating different types of models (ordinary and partial differential equations, agent-based simulations, etc.) and different types of data, generated at different scales of biological organization and in different contexts. In addition, I address the more fundamental concern that large-scale modeling aiming for completeness merely reproduces biological complexity.
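The integration problem can be made concrete with a toy hybrid model. The Python sketch below (entirely illustrative, not taken from any of the projects named above) couples a PDE-style layer, a one-dimensional diffusing nutrient field, to an agent-based layer of cells that consume it, using naive operator splitting with a single shared time step.

```python
import numpy as np

rng = np.random.default_rng(1)

nx, dx, dt = 100, 1.0, 0.1
D = 1.0                                # diffusion coefficient (hypothetical)
field = np.ones(nx)                    # nutrient concentration on a grid
agents = rng.integers(0, nx, size=20)  # each agent = a cell at a grid position

for step in range(1000):
    # PDE sub-step: explicit finite-difference diffusion on a periodic
    # domain (stable here since D*dt/dx^2 = 0.1 <= 0.5).
    lap = np.roll(field, 1) - 2 * field + np.roll(field, -1)
    field += dt * D * lap / dx**2
    # Agent sub-step: each cell consumes nutrient locally and takes a
    # stochastic step biased toward the richer neighboring site.
    for i, pos in enumerate(agents):
        field[pos] = max(field[pos] - 0.05 * dt, 0.0)
        left, right = field[(pos - 1) % nx], field[(pos + 1) % nx]
        move = 1 if right > left else -1
        if rng.random() < 0.5:
            agents[i] = (pos + move) % nx

print(f"mean nutrient after coupling: {field.mean():.3f}")
```

Even this toy coupling silently fixes the order in which the two layers update and forces both onto one time step; at the scale of a whole cell or organ, such integration choices multiply, which is one source of the methodological controversies the paper analyzes.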