International Society for History, Philosophy, and Social Studies of Biology



Program

TUESDAY, JULY 7  /  15:30 - 17:00  /  DS-M460
Organized session / standard talks
Tying explanations to models in neuroscience
Organizer(s):

Eric Hochstein (Washington University in St. Louis, United States)

Advances in computer simulation and modeling techniques over the past few decades have provided neuroscientists with new tools and avenues of research for studying the brain (Thagard 1993; Winsberg 1999). These new modeling techniques have brought with them new ways of interpreting and thinking about what the brain is and what it is doing. This, in turn, has led many to conclude that these new ways of modeling the brain provide new types of scientific explanation, unlike those that came before. These include mechanistic explanations (Craver 2007; Bechtel 2008; Eliasmith 2010), computational explanations (Chirimuuta 2014), dynamical explanations (Van Gelder & Port 1995; Kelso 1995; Chemero 2009), and topological explanations (Huneman 2010). This proliferation of model types has given rise to an ongoing debate in the philosophy of science about the exact relationship between scientific models and scientific explanations in neuroscience. Do different types of models necessarily support different kinds of explanations? If so, on what grounds? Can the same model play a role in fundamentally different types of explanations? Can different kinds of models be part of the same type of explanation? While some stress the epistemic differences between these models and the bearing of those differences on their explanatory power, others have argued that many of these accounts are not truly distinct and can be integrated into the same broadly mechanistic framework (Piccinini & Craver 2011; Kaplan 2011; Eliasmith 2012). The purpose of this symposium is to explore further the relationship between different types of scientific models and the nature of scientific explanation in a neuroscientific context.


Why no model is ever a mechanistic explanation

Eric Hochstein (Washington University in St. Louis, United States)

There is a great deal of disagreement within the philosophy of neuroscience over which sorts of models provide mechanistic explanations and which do not (e.g., computational models, dynamical models, network models). These debates often hinge on two commonly adopted, but conflicting, ways of understanding what mechanistic explanations are: what I call the “representation-as” view and the “representation-of” view. In this paper, I argue that neither account can adequately make sense of neuroscientific practice. In their place, I offer a new alternative that can defuse some of these disagreements. I propose that individual models never provide mechanistic explanations, regardless of what type of model they are. Mechanistic explanations by necessity span sets of different types of models. To claim that a given model provides us with a mechanistic explanation is in fact to claim that this model, in conjunction with all other pre-existing models and background information we have about the system, allows us to decompose the system into parts and operations for better understanding. Thus the same model counts as both a mechanistic and a non-mechanistic explanation, depending entirely on what other background information is available at the time and on whether the model allows us to fill in informational gaps about some mechanism operating in the world.


Perspectives on the brain

Mazviita Chirimuuta (University of Pittsburgh, United States)

As has frequently been noted, cognitive neuroscience in the 21st century has been transformed by a proliferation of new technologies for conducting experiments on the brain. These include the numerous modalities of neuroimaging (functional and anatomical, invasive and non-invasive), optogenetics, and brain-computer interfaces (BCIs). Some philosophers of neuroscience have argued that contemporary neuroscience is also characterized by explanatory pluralism: different experimental and theoretical research traditions are associated with distinct explanatory styles, such as mechanistic, computational, and dynamical ones (Silberstein and Chemero 2013; Chirimuuta 2013, 2014; Ross 2015). The focus of this talk is the relationship between these various research traditions. I first propose that different experimental techniques offer unique and complementary perspectives on neural function: the theories and models associated with one stream of research offer insights not afforded by the others, yet cannot be said to contradict the alternative perspectives (Giere 2006). I then discuss the experimental techniques most readily associated with mechanistic versus dynamical models, and argue that there is a Levins-style (1966) trade-off between the “realism” afforded by mechanistic models and the “generality” made possible by dynamical models.


Graphing the brain's dark energy: Network analysis in the search for neural mechanisms

Carl F. Craver (Washington University in St. Louis, United States)

In a recent paper on network analysis, Philippe Huneman conjectures that topological explanations represent a style of explanation distinct from mechanistic explanation. I discuss these claims in the context of recent work using resting-state functional correlations to infer cortical structure. Graph theory and network analysis are central research tools in the Human Connectome Project (Sporns 2011). I argue (in agreement with Huneman) that network models (often coupled to facts about localization) can be used to describe features of the organization of complex mechanisms that other representational systems are ill-equipped to describe. I catalogue some of the surprising uses of network theory in contemporary connectomics. Network theory can be used to construct accurate, complete, and well-verified mathematical descriptions of both brain activity and brain structure without thereby explaining how brains do anything at all. It would distort contemporary network theory in the connectome project to see it as aiming at a style of explanation distinct from mechanistic explanation. The project aims at providing useful information about mechanisms, and its graphical models are a surprisingly indirect means of accessing and representing such information.