International Society for History, Philosophy, and Social Studies of Biology



Program

THURSDAY, JULY 9  /  15:30 - 17:00  /  DS-1540
Organized session / standard talks
Failure in science
Organizer(s):

Ann-Sophie Barwich (Center for Science and Society, Columbia University, United States)

Science fails. And it fails on a daily basis. Experiments go wrong, measurements do not deliver anticipated results, probes are contaminated, models are considered too simplistic and unrepresentative, and some inappropriately applied techniques lead to false positives. When philosophical debate has dealt with scientific failures, it has predominantly focused on justifying the success of the scientific enterprise in terms of its capacity to represent reality. An often-overlooked characteristic of science—not least in the quest for more grants and media-friendly success stories—is that it inevitably must fail to do the job it sets out to do. For scientific research to exceed our initial modelling assumptions and to continuously push our ever-adjusting experimental limits, things have to go wrong. We want to investigate different aspects of failure as integral to science. Our interest in failure refers to more than the obscure ‘element of surprise’ or an incentive to do better next time. We think that failures in science are beneficial in their own right. Not only because failures might lead to accidental findings, or because they correct prior assumptions in an experimental set-up. Rather, we think failures guide scientific enquiry on a par with success stories. Failure complements success in more than some proverbial sense: while a success reinforces a current research strategy, failure opens up many different alternative routes, broadening our inquiry. By asking why something appears to present a failure, different possibilities are conceived. Each of these possibilities can be investigated by designing constraints under which different features and behaviours of research materials are modelled and simulated. Failure requires creative and flexible reasoning that exceeds a given modelling outset. The importance of failure in science lies in its demand to rethink the constraints that underlie our models and, moreover, our current successes.


Explicit failure

Stuart Firestein (Columbia University, United States)

Public support (i.e., money) for science is critical, but we find a public ever more excluded from the scientific process, left with second-hand newspaper accounts of a stream of discoveries. This is further distorted by an educational program that presents science as an impenetrable mountain of facts, often presented in an equally impenetrable language. The problem is not significantly remedied by presenting the public with outreach programs of “science for dummies” lectures by leading scientists. Although these lectures, when done well, are often entertaining and provide the public with a slightly friendlier view of science, the problem in the end is not one of information but of attitude. Scientists know implicitly that it is questions that count more than facts and that failure is an integral part of the scientific process. Certainty is rare and, because of its common association with authority, usually suspect. Thus science is most creative because it traffics in ignorance, failure, doubt and uncertainty. Remarkably, these add up to a 400-year record of success and progress unparalleled in human history. Why is it so difficult to make this implicit scientific method of failure and ignorance, the real scientific method, explicit to the non-professional scientist? Are failure and ignorance the possession of an elite corps of PhD scientists? Indeed, are they even the possession of many working scientists? Do pressures of funding and promotion steer most scientists away from risky questions that have a high potential for failure, pushing them instead to adopt safe if less interesting research programs? With a low tolerance for failure and uncertainty we promote a science that lacks courage and patience and sacrifices creativity for the mundane. Can this be rectified? Galileo changed the attitude of both church and public towards science with a single book.


Intrinsic hidden constraints in data-intensive biology

Isabella Sarto-Jackson (Konrad Lorenz Institute for Evolution and Cognition Research, Austria)

Biology is progressively changing into a data-intensive science. Advocates of big data are thrilled about the prospect that the limitations that previously forced researchers into hypothesis-driven research can now be overcome by using data-driven approaches. Concomitantly, these experts urge that science has to get past its historically rooted obsession with causal explanations in favor of revealing relationships in data by correlation methods. They claim that knowing what, not how, is good enough for answering scientific questions. And in fact, data-intensive success stories have opened new empirical avenues in biology, such as designing diagnostic tools and developing high-throughput technologies. These advances have proven particularly valuable in the applied sciences. However, unwavering enthusiasm for a successful strategy often comes at the expense of overlooking hidden constraints. Scientific observation is perspectival: the nature of instruments, the methods, and the data measured reflect only a selected aspect of reality through which researchers interact with the world. The vast scale notwithstanding, scientists still apply well-established, standard scientific methods to get from the actual data to models. Thus the practice remains internal to the scientific perspective, irrespective of the underlying data volume and analysis methods. To make matters worse, the more successful (in terms of richness of answers) a strategy seems, the less necessary it seems to go beyond its borders. But this is where intrinsic constraints are concealed. In line with Firestein, I believe that basic science begins where data run out and new questions must be created. To draw (causal) conclusions going beyond the current scientific perspective, researchers have to move to a broader theoretical perspective by temporarily neglecting facts and correlations in a controlled manner.
I will support my claim using recent examples from big data approaches in molecular and structural biology that are used for 'rational drug design'.


"Simply" failure or delayed success? Mapping smells in the brain

Ann-Sophie Barwich (Center for Science and Society, Columbia University, United States)

What is considered a failure or a failing research strategy in scientific practice? It is often not obvious whether a lack of success is based on the characteristics of the target system, the choice of theoretical concepts, or current instrumental boundaries. I want to understand what forms of failure render the need for alternative research strategies visible by analysing when modelling limits come to be interpreted as failures (rather than delayed successes) of standard approaches. I address this question by looking at contemporary research mapping smells in the brain. It was long thought that the olfactory pathway works similarly to the visual system, forming topographic activation patterns in the cortex. Previous studies had successfully established a clear activation pattern in the olfactory bulb (a multilayered neural structure situated in the frontal lobe of the brain). But the apparent lack of a clear spatial organisation of activation patterns in the olfactory cortex presents one of the major research puzzles today. A recent study (Chen et al. 2014) now indicates that olfactory processing in the piriform cortex (PC) exhibits a different organisational principle. In this study, the experimental method was old but the conceptual modelling strategy was new. Drawing on this study, I examine when research strategies—such as modelling possible cortex activations based on bulb patterns—require re-examination. What is considered a modelling constraint and what a failure? And how do previous experimental failures influence the conceptual development of alternative questions?