Questioning the Nano-Bio-Info-Convergence

Raphaël Larrère
1. Introduction

Some preliminary remarks will specify the limits of
the present paper. I have focused my attention on three fields of
biological sciences that are particularly involved in the development of biotechnologies and have also long been ‘converging’ with information science: genetics, developmental biology, and molecular biology in a
broader sense. I have therefore left aside important fields in biology,
particularly the ones concerning physiology and ecology. I have also
limited my inquiries to the research units of the French National Institute for Agricultural Research (INRA). On the one hand, that limitation was made for practical reasons, as the INRA laboratories involved are near Paris. On the other hand, my aim was to collect not representative but significant data. To that end, I needed to conduct in-depth interviews with researchers and to observe experiments in progress.
Moreover, I had to establish a relationship of trust with the researchers, which was facilitated at INRA since I became one of their colleagues and had the opportunity to meet most of them. That is how I
came to notice that other scientific fields, not dealt with in this
paper, were particularly interested in nanoscience and nanotechnology.
A laboratory, for instance, is trying to understand some architectural
characteristics of trees and the mechanical properties of wood by
analyzing the nanostructures of wood cells. Other research units are
dealing with nanometric structures that condition the texture and the
organoleptic or nutritional properties of food (e.g., milk
casein micelles, low-density lipoproteins of eggs, oleosomes contained
in oilseeds, crystalline lamellas of starch granules). In these cases
the nanoscale approach is the result of scientific goals that have
nothing to do with any ‘convergence’ of the bio and nano fields. For
instance, analyzing the nanometric structures involved helps one understand the qualities of a material such as wood, or how trees are able to keep a saturated sap column at negative pressure (considering their height) and prevent air from invading it when the tree is wounded. The same goes for understanding how milk is turned
into cheese (or yogurt), or how emulsions hold.

2. Theoretical Convergence and Instrumental Convergence

I work with the hypothesis that there are two different kinds of ‘convergence’, which are, however, frequently interwoven. The first ‘convergence’ is theoretical and actually encompasses different situations. In the first one, the converging fields refer to the same model and share the same epistemic culture. That is how thermodynamics has been used as a theoretical reference in ecosystem ecology as well as in metabolic physiology and machine science, or how cybernetics has established itself in ecology and physiology as well as in information science. In the second situation, one of the two fields is used as a model for the other through an import of concepts and reasoning by analogy, as when, for instance, economics is used as a theoretical reference in sociobiology. Sometimes there is also a reference back, as when the sociobiological interpretation of evolution inspired innovation economics. Moreover, one discipline can also refer to several different models. For instance, ecosystem ecology is a cybernetic model of the thermodynamic interpretation of ecosystems; and for characterizing the compartments of ecosystems (i.e. the functional groups), it borrows terms from economics, such as producers, primary and secondary consumers, recyclers, and productivity.

The second ‘convergence’ is instrumental: technologies resulting from developments in one field are useful, and tend to become essential, to research in other fields, without there being any obvious theoretical ‘convergence’. That was long the case with mathematics. From this point of view, the calculations made possible by computers have led to a kind of instrumental ‘convergence’ of computing with all the sciences. But in order to really speak of ‘convergence’, scientists should not just use computing tools in their research; they should also invent their own software and design specific tools for the experiments they are conducting.[1] I will argue that the convergence between biology
and information science was first theoretical before it became
instrumental.

3. From Theoretical to Instrumental Convergence

Since the discovery of DNA and its structure, the cellular mechanisms of protein synthesis have been described and understood in terms of information transfer. This informational framework was contemporary with the development of computing. Henri Atlan (1999) suggested that information science is a ‘hard’ discipline in the sense that it strongly influenced biologists in their interpretation of the role of genes in cellular operations. The molecular sequence of DNA in chromosomes was immediately identified with a code that would contain all the information necessary for the generation, development, and metabolism of living beings. The computer metaphor suggested a research program aimed at deciphering the code in order to understand and control the fundamental mechanisms of life. As early as 1961, Ernst Mayr announced that there is a genetic program inscribed in the nucleotide sequence of DNA and that this program provides a mechanical, non-vitalistic explanation of the development of organisms (Mayr 1961).

The metaphor of the genetic ‘program’ made biologists (and along with them teachers and popularizers) postulate that ‘it is all in the gene’. The genome, the one that geneticists sequence, decipher, and manipulate (well financed by public, private, or charity funds), is viewed as the key to the secret of life, as the identity of organisms or even, when it comes to human beings, as determining the characteristics of their psyche and their deviances.[2] Since most research in molecular biology and genetics has been conducted under the rule of the computer metaphor, information theory has indeed presided over biotechnological innovations, including genetic manipulations (transgenesis and mutagenesis) and even, for a while, the attempts at cloning (i.e. duplicating genetic information). The goal was indeed either to provide more information to the program of an organism, or, as Axel Kahn (1996) put it, "to subjugate the program of an organism to that of another organism", or to duplicate the genotype of particularly interesting phenotypes.
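To see why the metaphor was so compelling, it may help to make it literal. The following sketch is my illustration, not the author’s: the codon table is a small excerpt of the standard genetic code, and the input sequence is invented for the example. It treats transcription and translation as purely informational string operations, which is exactly the reading the computer metaphor invites.

```python
# A deliberately naive rendering of the 'DNA as code' metaphor:
# DNA as a string, transcription and translation as string transformations.

CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe", "GGC": "Gly", "GAA": "Glu",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna: str) -> str:
    """Transcription as a character-for-character substitution (T -> U)."""
    return dna.upper().replace("T", "U")

def translate(mrna: str) -> list:
    """Translation as a lookup over successive three-letter 'words'."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

# Invented toy sequence, for illustration only.
print(translate(transcribe("ATGTTTGGCGAATAA")))
# -> ['Met', 'Phe', 'Gly', 'Glu']
```

On such a reading, ‘deciphering the code’ is literally a decoding problem, which is what made the metaphor so productive as a research program.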
Molecular biology thus resulted from importing into biology the theoretical model provided by information science. At first, molecular biology used the technologies of information science much in the same way as other disciplines did: the computer allowed faster calculations for operations that were hitherto beyond the researchers’ reach, made it possible to store and query huge databases, and eventually served as a word processor. The development of bio-computing, a
specialty dedicated to research in biology, along with the
miniaturization of analytical tools such as DNA chips (or lab-on-a-chip
devices),[3]
has allowed molecular biology to instrumentalize the technologies
derived from the theory on which it was based. Thus, the design of
microcomputing tools has allowed biologists to sequence genomes much
faster today than before. In a way, the ‘instrumental convergence’ reinforces the theoretical connection: first because the computing tools being used were designed according to this connection, and then because they allow processing more data in less time, thereby making molecular biology more efficient. It can thus be stated that there
already exists an accomplished and solid micro-bio-info convergence, which might become better able to study the mechanisms of transgenesis and cloning, and to improve their efficiency, by shifting from the ‘micro’ or cellular level to the ‘nano’ or molecular level. The ‘nano-bio-info convergence’ would thus extend the success of the ‘micro-bio-info
convergence’.

4. A Paradox

Ironically, as microcomputing was improving the efficiency of genetics and developmental biology, the latter tended to detach themselves from the computing metaphor underlying the research programs of molecular biology. The use of bio-computing has little to do with recent developments in molecular genetics that question the central dogmas resulting from the computer interpretation of how the cell functions. The simple proof that it is possible to transfer a specialized – thus differentiated – nucleus into an oocyte and to allow this oocyte to reorganize the nucleus’s genome – by way of several disruptions – so that it becomes totipotent testifies to the existence of epigenetic mechanisms.[4] So does the phenotypic variation shown by genuine clones of fish, or the variation in the expression of transgenes according to their integration site.

Moreover, by becoming more efficient thanks to bio-computing, molecular biology has been able to reveal far more complicated mechanisms than would have been expected given the simplistic determinism derived from the computing metaphor. The central dogma ‘one gene, one protein, one function’ has been shattered: we now know that one gene can intervene in the synthesis of several proteins, and that one function is in most cases controlled by several genes. Bio-computing also makes it possible to study ‘epistatic’ interactions within the genome and, for instance, to outline devices meant to ‘repair’ the genome when anomalies occur during DNA duplication.[5] It is also possible to study the interactions between the genome and its cellular environment, which regulate gene expression or act as a kind of ‘phenotypic filter’.

The paradox of today’s situation is that the technical tools derived from a reductionist approach to cell functioning have nevertheless contributed to undermining the basic assumptions of the paradigm. Meanwhile, the scientists who are looking for a new paradigm – prompted by these findings – are stuck, because the old reductionist paradigm is still remarkably efficient. Large-scale genome sequencing efforts continue to provide a wealth of information about the functions performed by living organisms, and about their dysfunctions as well, and the computer metaphor is still the inspiring source of synthetic biology.[6] However, the information provided by bio-computing may not necessarily reinforce the central dogma.
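The contrast between the central dogma’s one-to-one mapping and the ‘epistatic’ picture can be fixed with a deliberately artificial sketch. This is my illustration, not the author’s: the genes, weights, and interaction term below are invented, and nothing in the toy model is actual biology.

```python
# Toy contrast between 'one gene, one protein, one function' and an
# epistatic genotype-phenotype map. Genes are booleans (present/absent);
# the weights and interaction term are invented for the illustration.

def dogma_phenotype(gene_a: bool) -> float:
    """The central dogma's picture: the trait tracks a single gene."""
    return 1.0 if gene_a else 0.0

def epistatic_phenotype(gene_a: bool, gene_b: bool, gene_c: bool) -> float:
    """Several genes contribute, plus an epistatic (interaction) term:
    the effect of gene_a depends on whether gene_b is present."""
    additive = 0.4 * gene_a + 0.3 * gene_b + 0.3 * gene_c
    interaction = 0.5 if (gene_a and gene_b) else 0.0
    return additive + interaction

# The 'effect' of knocking out gene_a varies with the genetic background,
# so no single gene can be said to carry the function on its own.
for gene_b, gene_c in [(True, True), (False, True), (False, False)]:
    effect = (epistatic_phenotype(True, gene_b, gene_c)
              - epistatic_phenotype(False, gene_b, gene_c))
    print(f"background (b={gene_b}, c={gene_c}): effect of gene_a = {effect:.1f}")
```

Once such interactions are admitted, the question ‘what does this gene code for?’ no longer has a context-free answer, which is precisely what strains the computer metaphor.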
The dilemma can be summarized in a few words: either the technological power acquired by investigative tools derived from the computer metaphor will reinforce the reductionism common to molecular biology and computer science, or that reductionism will be undermined by emergent properties that gradually lead to the metaphor being replaced by models of complexity.

5. Conclusion

How are we to characterize the nano-bio
‘convergence’ in this context? The researchers I have interviewed
consider this convergence from a strictly instrumental point of view.
Will nanotechnology provide biological research with still more
efficient tools, thus improving the control of genetic engineering,
which has up to now come close to tinkering – transgenesis, nuclear transfer for cloning, etc.? The prospect of nanotechnology enabling biotechnology raised skepticism, distrust, and sometimes
strong refusal among the scientists who were interviewed. They are
skeptical when they consider that, despite the significant improvements
provided by microfluidics in laboratory research, ‘weird and
unpredictable interactions’ can occur at the nanolevel between fluidic
molecules and nanotube molecules because of unavoidable surface
tensions. They distrust the enabling power of nanotechnology when they
consider that the major issues at stake in today’s biology are more
conceptual than technological, and that it is therefore essential,
instead of merely improving the research tools, to get rid of the reductionist paradigm in which molecular biology has locked itself, and to formalize the interactions within the genome and between the nucleus
and the cytoplasm at the cell level. They are reluctant regarding the
expensive design of new (micro)bio-computing technologies, which has already led to a concentration of financial resources on specific research programs at the expense of promising alternative research
pathways. This phenomenon of resource concentration could be reinforced
by the convergence with nanotechnology.

Notes

[1] As is the case in geographic
information systems (GIS) and bio-computing.

[2] Consider for instance the genetic determinism of pedophilia assumed by President Sarkozy in a recent campaign against deviant teenagers.

[3] Microfluidic devices that reduce detection limits and analysis time when measuring proteins, contaminants, etc.

[4] The research of INRA’s Developmental Biology and Reproduction Unit, which is involved in mammal cloning experiments, explicitly aims to study embryonic development not as a programmed mechanism but as a series of ‘dialogues’ that take place – more or less successfully – between the nucleus and cytoplasm of the ovum, and between the embryo and its uterine environment at different stages. It is a way of focusing on the epigenetic regulation presiding over the development of organisms, but also on the fact that cloning does not mean replicating a genome: the ‘reprogramming’ of the somatic nucleus by the activated ovum transforms the genome and leads to methylations that prevent certain genes from being expressed.

[5] Such mechanisms might constitute an emergent property.

[6] See the papers by Morange and Bensaude-Vincent in this issue.
References

Atlan, H.: 1999, La fin du "tout génétique"? Vers de nouveaux paradigmes en biologie, Paris, INRA Éditions.

Kahn, A.: 1996, Société et révolution biologique – Pour une éthique de la responsabilité, Paris, INRA Éditions.

Mayr, E.: 1961, ‘Cause and Effect in Biology’, Science, 134, 1501-1506.