HYLE--International Journal for Philosophy of Chemistry, Vol. 10, No.1 (2004), pp. 23-46
http://www.hyle.org
Copyright © 2004 by HYLE and Andrea Tontini


HYLE Article

 

On the Limits of Chemical Knowledge

Andrea Tontini*

  

Abstract: Constraints on the representational capability of the language by which chemists state, in a simplistic yet truthful manner, their knowledge of the spatial and electronic structure of molecules are imposed by (a) the impossibility of preparing every conceivable compound bearing a specific structural fragment and (b) objective limitations in our synthetic capabilities. Because intra- and intermolecular organization is depicted with a limited degree of detail, the prediction and explanation of chemical reactivity is hampered, and even more so our understanding of the molecular mechanisms underlying phenomena at higher levels of complexity. Epistemologically speaking, however, predictive failures are not entirely negative, as they often signal unprecedented chemical properties or events.

Keywords: chemical language, structural formulas, chemical synthesis, limits of chemical knowledge, realism.
 

Introduction

Modern science has promoted a considerable advance in our understanding of the physical world and has likewise driven the technical developments that have secured a substantial improvement in our standard of living. Somewhat paradoxically, however, twentieth-century scientific achievements have made us perceive, possibly better than in previous centuries, the immense, elusive complexity of corporeal being.[1] While the French chemist Marcellin Berthelot (1827-1907) could proclaim that "[t]he universe keeps no more secrets today",[2] nowadays one cannot but smile at the naivety of such a statement. Indeed, increasingly during the twentieth century, scientists had to face the existence of apparently impassable barriers to the investigation of the structure of the material world.[3] The inherent limits of scientific knowledge are a much-discussed and vast topic, owing to the diversity of the theoretical principles and experimental procedures used in the various disciplines of modern science. It comprises issues as dissimilar as, for example, the incompleteness of axiomatic (e.g. arithmetic) systems, Heisenberg’s uncertainty principle, the unpredictability of the behavior of complex systems, and the medical significance of genetic information (e.g. the ability to predict diseases from specific genes).[4] Interestingly, though, according to the cosmologist J.B. Hartle it is still possible to divide the limits of scientific knowledge into three main groups.[5] Even if "[n]o claim is made" by the author "that these are the only kind of limits", he holds that they "have a general character that is inherent in the nature of the scientific enterprise" (Hartle 1996, pp. 116-7). Briefly, he identifies

  1. the "difference between what could be observed and what could be predicted" as a first kind of limit, a limit inescapably issuing from the contrast between the intricate complexity of the world and the simplicity of the "laws governing the regularities of that world";[6]
  2. limits due to the fact that "even simple theories may require intractable or impossible computations to yield specific predictions";
  3. a "third kind of limit [concerning] our ability to know theories through the process of induction and test".

This classificatory system provides, in my opinion, a good framework for analyzing the restrictions on knowledge that exist in the molecular sciences. Limits of type B, i.e., the noncomputability of chemical phenomena, will not be dwelled upon in the present paper, as they have been widely discussed by theoreticians and philosophers of chemistry, especially with regard to the issue of the reducibility of chemistry to physics.[7] For the sake of completeness, I will make just a few brief remarks. Computational constraints essentially concern the wide area of chemistry known as physical chemistry (comprising disciplines like quantum chemistry and chemical thermodynamics), which searches, by physical methods, for the fundamental laws governing chemical phenomena and expresses them mathematically. For example, satisfactory purely quantum mechanical descriptions are currently achievable only for very simple molecules; even though ideally possible, the derivation of quantum mechanical models of larger systems is in fact a computationally intractable problem. This example suggests that type B limitations of chemical knowledge are not limitations ‘in principle’, but operational constraints, determined by the volume of calculation that today’s processors can perform.[8]
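To give a concrete feel for the scale of the problem (the following sketch is mine, not drawn from the paper, and the orbital counts are merely indicative), one may count the Slater determinants entering a full configuration-interaction expansion, whose number grows combinatorially with the numbers of electrons and orbitals:

```python
# Rough size of a full configuration-interaction (FCI) expansion: with n_alpha and
# n_beta electrons distributed over m spatial orbitals, the number of determinants
# is C(m, n_alpha) * C(m, n_beta). Orbital counts below are illustrative only.
from math import comb

def fci_determinants(n_alpha, n_beta, m_orbitals):
    return comb(m_orbitals, n_alpha) * comb(m_orbitals, n_beta)

print(fci_determinants(5, 5, 25))     # a water-sized problem: ~2.8e9 determinants
print(fci_determinants(21, 21, 120))  # a benzene-sized problem in a modest basis: astronomical
```

Exact treatment over spaces of this size is what makes the problem intractable in practice, whatever the formal solvability of the underlying equations.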

Philosophers of science have paid less attention to putative type A restrictions on the cognitive abilities of the chemist. Yet wide sectors of the ‘central science’ (such as synthetic chemistry, biochemistry, mass spectrometry, etc.) deal with processes originated by entities (namely, molecules) whose complexity is much greater than that of the systems studied by the physical disciplines.[9] In addition, the theoretical language used by chemists to describe those processes, and consequently to construct hypotheses or make predictions, is much less sophisticated than the deeply mathematical language employed in the physical and physicochemical sciences. In this article, I will therefore address the existence of type A limitations in the field known as synthetic organic chemistry, which is, epistemologically speaking, particularly important. I will illustrate how the prediction and explanation of chemical reactivity and non-bonded interactions in molecular-structural terms are hampered, by dealing first with organic reactions under controlled conditions (Sect. II) and then with events at higher levels of complexity, e.g. the biological actions of drug molecules (Sect. V). I will argue that these limits are due to the highly schematic quality of our representations of molecular structure (Sect. IV), which depends, inter alia, on the existence of barriers to the synthesis of new chemical compounds (Sect. III). The discussion of this subject will be preceded by a brief analysis of the methodology of preparative chemistry (Sects. I-II), intended to form a background against which the subsequent ideas can be developed.
 
 

I. The language of chemical sciences: its structure and emanation from the practice of chemical synthesis

The three main scientific disciplines, physics, chemistry, and biology, are usually arranged in a reductive hierarchy according to the degree of complexity of the systems with which they deal. Following this scheme, chemistry, whose fundamental cognitive aim is to understand how the structure of molecules determines the properties of natural substances and composite material systems, is positioned between physics, for which the atom is a fundamental target of interest, and biology, which takes the cell, i.e. a system composed of numerous molecules, as a basic object of investigation. This arrangement somewhat overshadows what I deem the most essential trait of chemistry, which becomes immediately manifest when we consider chemistry not from the point of view of its theoretical constructs but of its object of study. In its most fundamental sense, chemistry is the science of substances, that is, that province of modern science which deals with the transformations of material substances, whether artificially induced or spontaneously occurring. As any textbook on the history of chemistry shows, it was precisely because of chemists’ interest in this specific aspect of reality that a peculiarly chemical scientific language was developed.[10] The productiveness of this language, centered on the notion of structural formulas, is impressive. Suffice it to say that it has made possible the execution of a huge number of reactions between the most diverse chemical substances, leading to the isolation of millions of products, among which we find the materials that make up virtually all the manufactured goods we use in our daily life. Importantly, the use of this language is not limited to preparative chemistry but is essential to every field of chemical research, including physical chemistry, as data from, for instance, microcalorimetry, spectroscopy, and reaction kinetics would hardly be of any utility were we not able to interpret them in molecular-structural terms.

The structure of chemical language will now be briefly examined, beginning with its logical framework, before we deal with the theoretical and conceptual elements that enrich its semantic content in Sections II.2 and II.3. Even if mathematics plays a minor part in it, while semi-quantitative and even qualitative concepts become of central importance, modern chemical language is as remote from ordinary language as is the language of the exact sciences. Its elements (symbols, formulas, concepts), too, have a definite, unambiguous semantic value and are linked together in accordance with logical principles. Despite its formalism being quite unsophisticated compared to that of physics, the language of synthetic chemistry can be considered a genuinely scientific language.[11] Basically a formal language, it comprises monosemic symbols organized in accordance with logical rules. Specifically, each atom type is identified by a letter or a syllable, which can be surrounded by a fixed number of dots representing outer-shell electrons. The resulting symbols are the units of structural formulas, which can be built according to the octet rule and transformed into other structural formulas by drawing proper chemical equations. Since Ingold’s introduction of the theory of reaction mechanisms, a molecular transformation can be represented as a series of consequential events. This is done by resolving, according to other quite simple logical rules (how to ‘move’ electrons, balance charges, etc.), the structural change globally expressed by a chemical equation into a number of chemically plausible intermediate stages. The formalization of reaction mechanisms is particularly useful for interpreting experimental outcomes and modulating reaction conditions accordingly (Section II.5).
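For illustration only (the sketch is mine, not part of the article), these logical rules can be mirrored in a few lines of code: a structural formula becomes a small labeled graph of atom symbols and bond orders, and a crude valence check plays the role the octet rule plays for the chemist.

```python
# A structural formula as a tiny labeled graph: atoms carry an element symbol,
# bonds carry an order. The valence check below mirrors, very crudely, the octet
# rule for a few common organic elements. Purely illustrative.
USUAL_VALENCE = {"H": 1, "C": 4, "N": 3, "O": 2, "Cl": 1}

# Methanol, CH3-OH: atoms indexed 0..5, bonds given as (atom_i, atom_j, order)
atoms = ["C", "O", "H", "H", "H", "H"]
bonds = [(0, 1, 1), (0, 2, 1), (0, 3, 1), (0, 4, 1), (1, 5, 1)]

def valences(atoms, bonds):
    """Sum of bond orders at each atom."""
    v = [0] * len(atoms)
    for i, j, order in bonds:
        v[i] += order
        v[j] += order
    return v

# Every atom satisfies its usual valence, so this formula is syntactically well formed.
assert all(v == USUAL_VALENCE[a] for a, v in zip(atoms, valences(atoms, bonds)))
```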
 
 

II. Types of reaction outcomes. The way scientific knowledge is expanded by synthetic chemistry research

Describing the synthesis of a new chemical compound, i.e., establishing the experimental conditions under which it forms, finding a suitable analytical procedure to isolate it as a chemically pure material, and assigning the correct structural formula to it, is the elementary task of the preparative chemist. These operations are a way of immobilizing, freezing, or objectifying the formal ‘principles’ that determine how atoms tend to be arranged depending on environmental conditions.

In general, when a preparative chemist sets to work he targets a definite structure, the synthesis of which he must first of all conjecture by analogy with known synthetic methods.[12] The reaction is then attempted and, if it fails to give the desired product or provides it only in low yield, the synthetic hypothesis is reformulated or abandoned in favor of alternative hypotheses.

The conclusions our chemist comes to, once he decides to discontinue the cycle of theoretical reexamination and experimental testing, belong to one of the following categories.
 

1. Results according to expectations

The first case can be formulated as follows: compounds X1,2,...,n are known to give Y1,2,...,n under certain conditions; it is further found that, under the same conditions, Xn+1 gives Yn+1. Here X represents a set of molecules having in common a given functional group, e.g. ‘alcohols’ (general formula R-OH), or a number of elements arranged in a specific manner, e.g. ‘linear aliphatic alcohols’. Correspondingly, Y may, for example, mean ‘aldehydes’ (R-CHO) or ‘linear aliphatic chlorides’.[13] The fact that the substances in our hands react according to expectations does not add very much to our chemical knowledge: extending the scope of known synthetic protocols to other substances of a given structural class makes the number of characterized chemical substances increase in an additive, not a multiplicative, fashion (see below). Besides, results according to expectations do not increase the semantic density of chemical signs, unless they are obtained in the context of a structure-reactivity relationship study. A deeper knowledge of the mechanisms by which the stereoelectronic properties of functional groups, or structural fragments, determine molecular events can in fact be gained by studies of this kind. Substituent effects on reactivity can presently be investigated in a rather advanced, quantitative fashion, that is, by establishing linear free energy relationships (LFER), a technique whose foundations were laid by Louis P. Hammett (1894-1987) in the 1930s. The idea behind structure-reactivity investigations can be illustrated by the following simple case: consider a set of four substances of formula R-A1, R-A2, R-A3, and R-A4, where R is a constant residue and the Ai are different substituents. Suppose each of the four compounds is subjected to identical conditions, and assume that the reaction yield (or the reaction rate) decreases in a relatively regular manner from R-A1 to R-A4. If a parameter (say, lipophilicity) of the substituents Ai can be found to vary in a similar way, it can be hypothesized that this property directly affects the energetics of the reaction, which allows one to draw conclusions about its molecular mechanism.
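The bare bones of such an analysis can be sketched as follows (the sketch and the numbers are mine, invented for illustration; in the Hammett case the fitted relation is log(kX/kH) = ρσX):

```python
import numpy as np

# Hypothetical substituent parameters (e.g. Hammett sigma values) for A1..A4
sigma = np.array([-0.17, 0.0, 0.23, 0.45])
# Hypothetical measured rate constants for R-A1 .. R-A4 (arbitrary units)
k = np.array([3.2e-3, 1.5e-3, 6.1e-4, 2.3e-4])

# Linear free energy relationship: log10(k_X / k_H) = rho * sigma_X
log_rel_rate = np.log10(k / k[1])            # k[1] is the unsubstituted reference
rho, intercept = np.polyfit(sigma, log_rel_rate, 1)

print(f"reaction constant rho = {rho:.2f}")  # sign and size of rho hint at the mechanism
```

The sign and magnitude of the fitted slope are what license mechanistic conclusions of the kind described above.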
 

2. Syntheses which prove to be successful only after adjusting experimental conditions

As chemists’ direct experience shows, even if one can very often be confident of getting the expected product in acceptable yield, it can also happen that a seemingly unproblematic reaction actually fails to proceed (sometimes this is simply due to solubility problems), affords a degradation slurry, or produces unexpected compounds (see below). Sometimes it is sufficient to vary the reaction conditions (for instance raising or lowering the temperature, adding a suitable catalyst, changing the solvent, the concentration of reactants, the molar ratios, or the addition sequence) in order to obtain Yn+1 from Xn+1 in satisfactory yield.[14] To decide on such changes, on such reformulations of the synthetic method, and to explain negative results (Section II.4), anomalous reactions (Section II.5), and similarities/dissimilarities in the behavior of different classes of substances (Section II.3), chemists have elaborated a network of powerful qualitative or semi-quantitative concepts. Specifically, we have concepts based on the geometrical representation of molecular structure (e.g. steric hindrance, strain, conformational motion), concepts drawing inspiration from elementary electrological notions (e.g. electron donation/withdrawal, charge dispersion, electronegativity), and concepts accounting for the unique behavior of certain classes of molecules (e.g. aromaticity, nucleophilicity, resonance hybrid). These concepts have normally been induced from a large number of experimental observations and represent tools that are indispensable to present-day research. Without them, structural formulas would be purely logical entities, syntactic constructs conveying practically no information about the properties of any of the individual substances they symbolize.[15]
 

3. Extension of the scope of a synthetic method to structurally analogous compounds

The scope of a reaction applicable to class X molecules can sometimes be expanded to one or more substances that, even if not belonging to X, are considered sufficiently similar to X-compounds (let us call them W-compounds). The idea of structural relatedness cannot be defined too rigorously. Structural similarity is claimed, for example, between heterocyclic analogs (e.g. benzene and pyridine derivatives), between compounds sharing a structural unit (e.g. a specific bond, as in aldehydes and ketones), or between molecules differing at only one position, where atoms belonging to the same group of the periodic table are present (e.g. thiols and alcohols). Applying the same reaction to X- and W-compounds allows one to trace similarities (or, in case of failure, dissimilarities) between the properties of the structural features that differentiate these classes of substances (e.g. the -OH and the -SH group), at least as regards a particular reaction mechanism.
 

4. Failures

The fact that, even after repeated attempts at changing the operational conditions, a certain reaction takes a course different from the one we expected is anything but a remote possibility. In effect, sometimes the introduction of just one group into a molecule, or the substitution of one group for another, can make the yield of a reaction substantially lower, or block the reaction altogether.[16] In that case, the reactants may be recovered unaltered; they may undergo degradation, giving an intractable mixture; or a certain number of mechanistically trivial side-products may be formed. Even results of this kind have their utility, by indicating the limits of applicability of a given synthetic method. In addition, they can provide insight into the molecular mechanism of a reaction.
 

5. Reaction following novel pathways

The last case can be expressed as follows: compounds X1,2,...,n are known to give Y1,2,...,n under a given set of conditions; it is found that, under the same or similar conditions, Xn+1 gives, say, Zn+1 in moderate to good yield. This is actually a very important case.[17] Many unprecedented reactions are in fact discovered by chance, that is to say, by detecting the atypical pathways certain compounds happen to take under the experimental conditions of otherwise well-established reactions. The possibility that a given organic (or inorganic) reaction can generate other reactions is what enables the current conspicuous advance of chemistry. Indeed, the central science seems at the moment to be the most active of the three principal sciences (physics, chemistry, and biology), at least when the flow of scientific papers is used as a scientometric measure (Schummer 1997). The reason for this would be that chemical progress rests upon what may be called a virtuous mechanism, which could be stated as follows: the more compounds we prepare and study, the greater the chances of discovering new reactions and the greater our ability to prepare still further compounds, which again can widen the number of feasible chemical transformations. The growth in the number of substances made in chemical laboratories thus has a propagating structure. Importantly, such exponential proliferation of new substances goes along with a vigorous refinement of the theoretical equipment of the chemist, since the greater the number of compounds available for reactive experiments, the greater the probability of encountering results of the sort described in Sections II.2-4.
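As a purely illustrative caricature (mine, with arbitrary rates, not a model proposed in the article), the virtuous mechanism can be rendered as a feedback loop in which the stock of compounds and the repertoire of reactions amplify one another:

```python
# Toy model of the 'virtuous mechanism': each period, known reactions applied to the
# current stock of compounds yield some new compounds, and a small fraction of those
# experiments reveal previously unknown reactions (Sect. II.5). All rates are invented.
compounds, reactions = 100, 10

for year in range(10):
    new_compounds = int(0.05 * compounds * reactions)            # successful new preparations
    new_reactions = max(1, int(0.001 * compounds * reactions))   # chance discoveries
    compounds += new_compounds
    reactions += new_reactions
    print(f"year {year + 1}: {compounds} compounds, {reactions} reactions")
```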

Finally, in order to be publishable material, the results obtained by synthetic researchers must be of the type stated in Section II.5 and, possibly, Sections II.2 and II.3. On the contrary, outcomes of the type discussed in Sections II.1 and II.4 are as a rule not considered worth submitting to a research journal, unless they are part of larger sets of data, or, in the case of results according to Section II.1, serve to trace structure-reactivity relationships.
 
 

III. The relational structure and the incompleteness of chemical knowledge

1. The chemical network

Reporting the synthesis of a new chemical compound cannot be regarded as an isolated accomplishment. Once published, the structure of that compound becomes part of a pre-existing classificatory system. More precisely, it is an item added to the lists of compounds bearing one or more of its functional groups. Classes of molecules bearing functional groups are connectable by links representing feasible chemical reactions. On this ground, Schummer (1998a) recently proposed that it is precisely the continuous extension of the network of convertibility relationships between substances of sufficient purity which forms the core of chemical knowledge. In Schummer’s model, any given substance, identified by its structural formula, represents a node within the ‘chemical network’, node-to-node connections being codified by experimentally validated protocols for functional group transformations. The network of convertibility relationships chemists have constructed, the author convincingly argues, is actually a very limited one, since chemists can only study reactions between pure substances. In principle, a richer frame of knowledge could be constituted, were they capable of establishing empirical relations between quasi-molecular species (i.e., ionized forms, different conformational states, van der Waals or dipolar complexes formed by molecules). Such entities cannot, however, be isolated in pure form, since their structural identity is altered by whatever manipulation they are submitted to.
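A minimal sketch of this picture may be useful (the code and the choice of entries are mine; the transformations themselves are textbook chemistry): substances are nodes, and each experimentally validated transformation is a directed, condition-labeled edge.

```python
# Schummer's 'chemical network' in miniature: nodes are pure substances, edges are
# experimentally validated transformations labeled by their conditions. Illustrative only.
network = {}

def add_transformation(net, reactant, product, conditions):
    """Record a validated node-to-node link: reactant -> product under given conditions."""
    net.setdefault(reactant, []).append((product, conditions))
    net.setdefault(product, [])          # the product becomes a node in its own right

add_transformation(network, "benzene", "nitrobenzene", "HNO3 / H2SO4")       # nitration
add_transformation(network, "nitrobenzene", "aniline", "H2, Pd/C")           # reduction
add_transformation(network, "aniline", "acetanilide", "acetic anhydride")    # acetylation

# Reporting a new synthesis amounts to adding a node and/or an edge to this graph.
print(network["nitrobenzene"])   # [('aniline', 'H2, Pd/C')]
```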
 

2. Combinatorial limits on the number of obtainable chemicals

Schummer’s observations help us understand a crucial point. We have no other way to understand molecular reality than by studying the behavior of purified chemical substances. This implies that the breadth of chemical knowledge is determined by the number of structurally characterized substances. By chemical knowledge I here mean not simply an ever vaster exploration of the multifarious forms into which matter can structure itself at the molecular scale, but also the evolution of the theoretical and conceptual apparatus of chemical language. As already mentioned, this dimension of chemical knowledge, too, is strictly dependent on the enlargement of the chemical network. From a formal point of view a nitro group, say, is today exactly the same entity it was in the 1920s. However, the ‘semantic density’ of the expression ‘-NO2’ has become greater, because much more is known about the chemical transformations the nitro group may undergo under definite sets of conditions, and about how it affects the reactivity of chemical compounds when present in their structures. Owing to the exponential growth in the number of new chemical substances, driven by the above-described ‘virtuous mechanism’, chemical knowledge is therefore bound to expand (provided, of course, that economic and social conditions supporting such a development are present). However, since it is clearly impossible to extend the chemical network to infinity, chemical knowledge will always remain knowledge in progress, the result of an incomplete process. Until I have prepared, and studied the chemistry of, every possible and conceivable (again, say) nitro compound, I will not have thoroughly probed the chemical properties of that group; I will not know its definitive ‘law’. In more general terms, the fact that the number of chemical substances that can be synthesized is in theory unlimited acts as an ultimate, impassable barrier to the advance of chemical knowledge, in that the informational content of functional groups, structural formulas, reaction mechanisms, in short, of the logical elements that constitute chemical discourse, cannot be expanded to its highest possible level.
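A trivial count (mine, not the author's) makes the point concrete: a single fixed scaffold with n substitutable positions and a palette of k substituents already admits k^n formally distinct structures bearing the same fragment, before any other scaffold is even considered.

```python
# How fast does the space of compounds carrying one fixed fragment grow?
# Decorating a single scaffold with n substitutable positions using a palette of
# k substituents already gives k**n distinct (formal) structures. Illustrative values.
for n_positions in (3, 6, 10):
    for k_substituents in (10, 50):
        count = k_substituents ** n_positions
        print(f"{n_positions} positions, {k_substituents} substituents: {count:.2e} structures")
```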

Rather interesting (though necessarily abstract and almost bordering on Borgesian fiction) speculations arise from taking this idea to an extreme. In order to exhaust chemically derived knowledge of, say, a given structural formula, we would have to synthesize every possible compound bearing one or more of its structural elements (e.g., a nitro group, an aromatic ring, a C(sp2)-C(sp2) bond...), which is clearly impossible. If, ab absurdo, this aim were nevertheless accomplished, it would then be meaningless to speak of chemical knowledge, at least as a form of scientific knowledge. Nothing would in fact remain to be explained or predicted in the realm of chemical reactivity. We could directly access an immense, all-encompassing collection of data regarding chemical transformations, and this would render superfluous the use of a specific theoretical language able to mediate between empirical reality and human intelligence.

3. Ontological restrictions on the structural diversity of chemically synthesized products

Not only the number, but also the type or, to be more technical, the structural diversity of the compounds that at a given point in time we have at our disposal for chemical experiments is circumscribed. It would in fact be erroneous to think of contemporary synthetic chemistry as a technique enabling one to build molecules with any desirable or imaginable structure. In the chemical network, there are node-to-node links which cannot be concretized and, consequently, synthetic routes which cannot be followed. Any chemically plausible transformation can, of course, be imagined, can take shape in one’s mind as a chemical equation. In every synthetic laboratory, there are pieces of paper on which equations are scribbled, i.e. two structural formulas separated by an arrow above which are symbols like THF, Δ, H+. (You also see them on fume-hood panes where people are allowed to write!) Expressions like these are, however, nothing but scientific hypotheses that may or may not be validated by experiment. We mentioned above that some reactions do not proceed according to our first hypotheses, because of electronic and steric substituent effects that are difficult to foresee; indeed, such effects are frequently explained only a posteriori (Section II.3). That is why our capacity to transform matter by standard functional group chemistry is, though vast today, very far from being boundless.[18] Even today, synthesizing compounds whose structure is fairly large or elaborate is, in fact, rather a difficult task.

It has been pointed out (Hoffmann 1995, pp. 87-94; Vinti 1994) that the distinguishing feature of (pure) chemistry is that it creates the objects it deals with. This is of course true. However, that is not tantamount to reducing chemistry to a moulding technique through which matter could be freely manipulated at the molecular level. Assembling a molecule is first of all a cognitive act. Any chemical synthesis bears evidence of a formal disposition inherent in matter independently of human thought or will. A molecule can at most be ‘invited’ to follow a given reaction pathway, i.e. placed under a set of conditions known to predispose other compounds to that process, but not ‘forced’ to undergo it. The fact that chemical compounds do not always react according to our hypotheses is in part attributable to the simplified nature of current chemical representations. Were such representations more sophisticated, we would certainly be better at devising the experimental conditions, for example by choosing a better catalyst. There are also objective reasons, however, such as specific steric and electronic features that impede certain chemical reactions. The reactivity of chemical substances is ultimately beyond our control. We have no access to the indefinitely large number of observations we would need for a complete comprehension of molecular structure and for total predictive power and synthetic capability. Chemical knowledge cannot be exhausted, not only for logical reasons but also, if I may use the word, for ontological ones. Schummer (1998b, sect. 4.5) recently argued that the predication of material properties (including chemical properties) will never be completed, since the number of chemically synthesized products can be increased ad libitum. This may be supplemented by saying that while there appear to be no limitations on how many structures chemists can synthesize, there do seem to exist restraints on which structures they can synthesize. Chemical knowledge proceeds in a specific direction, a biased, self-determined direction. To speak in metaphors, it is as if light from a source were radiated not in all directions but only within a narrow cone. As such a beam travels away from its source, it illuminates an ever larger area; yet this area will always remain only a small portion of the surface of any sphere centered on the light source.
 
 

IV. The linguistic frontiers of chemical knowledge

It should be clear from the previous discussion that chemical knowledge is not simply a list of protocols for reproducible chemical transformations. Chemical knowledge is also the endeavor to represent scientifically, to translate into a theoretical language, the organization and actions of matter at the molecular level. We saw above that pursuing synthetic chemistry has not only been important for practical aims; it has also been instrumental to the emergence of the modern chemical language. Concepts such as functional group, structural formula, and reaction mechanism, though long firmly established as logical elements of that language, are subject to continuous informational enhancement, driven by the multiplication of new chemical substances with which chemical and analytical experiments can be performed. Since such substances are by necessity, at any given point in time, finite in number (Section III.2) and restricted in variety (Section III.3), chemical representations of molecules will always suffer from a semantic deficit, owing to a large degree of simplification and abstraction.[19] That imposes type A limits (see Introduction) on the predictability of chemical phenomena.

To examine this issue in more detail, let us first of all ask ourselves: what is signified by structural formulas? How much information is conveyed to us, how much reality is brought to light, by these pivotal elements of chemical language? The main import of structural formulas, I agree with Schummer, is that of "represent[ing] substances in certain relations with each other, i.e. substances within the chemical network" (Schummer 1998a, p. 150). A structural formula, however, cannot be regarded simply as a list of functional groups. Besides showing the presence of one or more replaceable fragments in a molecule, a structural formula suggests how the rest of the structure may affect the reactivity of that same fragment. Thus, by relying exclusively on functional group logic, one would not be able to tell, for example, why aromatic substitution of 1,3-disubstituted benzenes mainly leads to 4- (and not to 2-) substituted regioisomers. These data are, however, easily interpreted in terms of steric hindrance.[20] The information that structural formulas carry about the spatial relations and the charge density distribution among the different parts of a molecule has, in fact, greatly sped up the expansion of the chemical network.[21]
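A small illustration may help (it is mine, and assumes the open-source RDKit toolkit, which the article does not mention): the three dichlorobenzenes carry exactly the same functional groups, yet their structural formulas, here encoded as SMILES strings, identify three distinct substances, i.e. three distinct nodes of the chemical network.

```python
# Three substances with the *same* list of functional groups (a benzene ring bearing
# two chlorines) but different connectivity: a structural formula says more than a
# bag of functional groups. Assumes the open-source RDKit toolkit is installed.
from rdkit import Chem

smiles = {
    "1,2-dichlorobenzene": "Clc1ccccc1Cl",
    "1,3-dichlorobenzene": "Clc1cccc(Cl)c1",
    "1,4-dichlorobenzene": "Clc1ccc(Cl)cc1",
}

for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    # The canonical SMILES differ: the three isomers are distinct substances.
    print(name, "->", Chem.MolToSmiles(mol))
```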

This deters me, especially when I consider how efficient and rapid that expansion is, from treating structural formulas as purely conventional signs serving as heuristic devices to pilot chemical synthesis. Rather, I am inclined to conceive of them as relatively faithful stereoelectronic replicas of the microscopic objects they designate, namely molecules. This is not, of course, a rigorous argument in support of a realistic interpretation of molecular theory. More convincing arguments for this epistemological position have been advanced by Del Re, who in a recent work (1998) has challenged the presumed incompatibility between quantum mechanics and the idea of the objective existence of molecular structure. For my part, I tried in a former paper (Tontini 1999, pp. 66-71) to justify on different grounds my basically realistic conception of chemical knowledge, and here I will take the liberty of adding just a brief remark to complete that discussion.

Laszlo (1998, p. 35) holds that the idea of a purely conventional nature of chemical notation is supported by the existence, and accepted use, of alternative ways of portraying molecular structure. Now, that claim is tantamount to denying that a flower, a building, or a face can be represented in a realistic manner by a pencil drawing, because pictures of these same objects can also be obtained by color photography. Of course, one cannot experience, on looking at a pencil drawing of a flower, the color of its corolla. But if one sees, say, five petals in the drawing, one will also see five petals in the photograph of the flower. Representations of an object may be sketchy and quite unlike one another, yet veridical, provided that there is no logical inconsistency between them. This is exactly the case with the different types of molecular representation: the structural formula of a given compound must, to be valid, be in accord with the molecular formula of the same compound; a three-dimensional model of a molecule, deduced by, say, X-ray crystallography and comprising bond lengths and angles, is only valid if consistent with the structural formula of that same molecule.[22]
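The consistency condition just stated can be made concrete in a few lines (again assuming RDKit; the example is mine): different structural formulas of isomeric compounds must all reduce to one and the same molecular formula.

```python
# The consistency constraint mentioned above, in miniature: any valid structural
# formula must agree with the molecular formula of the same compound. Assumes RDKit.
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

# Three different structural formulas (the dichlorobenzene isomers) ...
for smi in ("Clc1ccccc1Cl", "Clc1cccc(Cl)c1", "Clc1ccc(Cl)cc1"):
    mol = Chem.MolFromSmiles(smi)
    # ... all reduce to one and the same molecular formula, C6H4Cl2.
    print(smi, "->", CalcMolFormula(mol))
```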

However, structural formulas are highly stylized representations of very complex material systems. Molecules are not directly accessible to our senses. What we can do is interpret reactivity and spectroscopic data in order to reconstruct very basic features of the ‘geometry’ of such microscopic particles on the analogy of the spatial relationships we establish between macroscopic objects. That molecular organization is a deeply complex reality, of which we can grasp only the surface through chemical experimentation, is suggested by certain portions of the spectra of pure chemical substances. While, for example, the bands between 2.5 and 7.0 μm of an infrared (IR) spectrum of a medium-sized organic compound can easily be attributed to certain interatomic bonds or molecular fragments, the 7.0-11.0 μm region of the spectrum assumes a rather complicated shape that is unique to the molecule under analysis (and is therefore called the ‘fingerprint’ region), owing to the many transitions between quantized rotational states of the molecules in this absorption range. Thus, although they have almost identical spectra between 2.5 and 7.0 μm, compounds with a pronounced structural similarity can as a rule be distinguished by the fingerprint region. This part of IR spectra is evidence of ‘hidden’ aspects of molecular structure, aspects we are not able to explain, to translate into classical chemical language (though it is possible to analyze them by quantum mechanics), and which we therefore perceive as a unique trait, as something that has to do with molecular ‘identity’. In fact, no spectrum obtained by the application of a physical method of analysis is entirely explainable in chemical terms. Generally speaking, we see signals in it and we see background noise; but we often see features, too, which cannot be interpreted in molecular-structural terms. Evidence of the deep complexity of molecular structure is also provided by the continuous advances that take place in spectroscopic analysis. NMR spectroscopy, for example, has allowed the discovery of phenomena like the NOE (nuclear Overhauser effect) and spin-spin coupling, revealing very interesting properties of molecular structure.[23]

The ‘essentialist’ view of chemical knowledge, according to which chemists would gain a complete understanding of molecular texture, seems to me absurd.[24] Structural formulas (and the other ways of representing molecular structure, for that matter) are only minimally informative about states originating from conformational motion or from the absorption of energy from the outside, about activated complexes, about labile aggregates resulting from interaction with other molecules, and so forth. Every chemical event, be it an organic reaction under defined experimental conditions or a finely regulated biochemical process (see below), is mediated by such quasi-molecular entities. Thus, it is as if we were looking at its mechanism only from a distance that makes innumerable details impalpable.[25] That is why our ability to foresee the outcomes of chemical processes is so limited.

Is this predictive weakness just a transient feature of chemical theory? Or is it, instead, a constitutional characteristic of chemistry, issuing as a direct consequence of the way chemistry works? The ideas expressed thus far (see in particular Sections III.2-3) support the second hypothesis: chemical language will permanently be deficient in predictive power and unexpected results will never be eliminated from chemical research.

Rather interesting epistemological considerations can be made in this connection. Apparently, the same elements that hamper chemical prediction are also the driving force of chemical discovery. As was mentioned in Section II, unexpected results do not only magnify the expansive potential of the chemical network; they are also one of the key factors in the theoretical evolution of chemistry. Besides opening novel reaction pathways, unexpected results hint at properties peculiar to certain functional groups or molecular fragments. By detecting anomalies in the reactivity or the spectroscopic behavior of pure chemical substances, we gain insight into the inner organization of molecular reality and thus, in part, come to understand the subtle factors regulating chemical events. Such understanding includes, for instance, why a given compound is soluble in water and not in toluene, why it decomposes on heating to form this or that series of products, why it is prone to undergo this or that reaction. In brief, somewhat paradoxically, the predictive power of chemistry seems to increase through its setbacks.

The mechanism by which chemical knowledge is generated and expanded, which I have described in this paper, is what lies behind one of the fundamental differences between chemical laws and physical laws. Villani has tersely illustrated that difference:

[The laws of chemistry] are not necessity laws, but limitative norms. […] [T]he systems, which obey a [chemical] law […] are not identical, but only analogous, with one another. Therefore, when dealing with a new case, one cannot be sure that the system under examination is as analogous as to behave according to that law. [Villani 1994, pp. 179, 177-8; my translation]

If my epistemological model is correct, this ‘probabilistic’ feature is a stable, ingrained component of chemical cognition.
 
 

V. The impact of type A limits on applied chemistry: what do we know of the molecular mechanisms behind biological processes?

Classically, chemical knowledge is considered to be divided into two main fields, pure and applied chemistry. While the object of pure chemistry is to delineate molecular structure and, on that basis, to explain the physical and reactive properties of chemical compounds, the various branches of applied chemistry endeavor to describe the intimate organization and functioning of natural or artificial systems in terms of the properties of the molecules such systems are composed of. Theoretical simplification has a more profound effect on the explanatory and predictive capacity of disciplines like biochemistry, geochemistry, medicinal chemistry, materials science, etc., than on that of synthetic organic chemistry. In this last section, I will briefly analyze the problems brought about by what I have called type A limitations of chemical knowledge in the area of research I am, for professional reasons, most conversant with, namely medicinal chemistry. Much of what will be said may be extended, mutatis mutandis, to any other branch of applied chemistry.

In recent decades, innumerable studies from areas between chemistry and biology have greatly advanced our understanding of the chemical basis of a vast array of biological phenomena. However, if one were to examine the results of those studies critically, some perplexities would arise about the adequacy of current chemical language as a means of representing the chemistry of biological processes. These processes are based on delicate, highly integrated molecular mechanisms. Specifically, biological phenomena are initiated either by physical agents (light, heat, etc.) or by molecular recognition events involving one or more of the innumerable macromolecular substances produced by living cells (e.g. a globular enzyme, a membrane receptor, a DNA segment, an immunoglobulin) and an endogenous or xenobiotic small-to-medium-sized chemical species (e.g. a neurotransmitter, a vitamin, a drug, an inorganic ion). Such agents bind to each other in a highly selective manner through a combination of electrostatic, van der Waals, and hydrophobic forces. The resulting noncovalent complex [26] triggers a cascade of further chemical events, ultimately leading to a specific physiological effect. How does the bioscientist gain knowledge about these processes? Ordinarily, the experimental data necessary to characterize the biologically significant interactions between a small molecule and a protein, or between two proteins, are obtained by bringing biological systems into contact with (variable doses of) suitable chemical products. For instance, the fact that the pharmacological activity of acetylcholine (ACh) is mediated by dissimilar receptors (a major breakthrough in neurochemical science, leading to the understanding of fundamental aspects of parasympathetic structure and function, which has had important therapeutic implications) was demonstrated by studies employing compounds, such as nicotine and muscarine, assumed to counterfeit (to mimic, as is commonly said) distinct ‘conformationally frozen’ acetylcholines.[27] Such an approach of employing purified chemical substances as an indirect means for the investigation of biological processes entails a translation of highly complex material phenomena into a relatively simple theoretical language, namely, the language of structural formulas. It will, therefore, incur drawbacks of the kind already described. We saw that every chemical substance is identified by one structural formula, a theoretical expression that is certainly useful for predicting chemical reactions. Unfortunately, however, a structural formula is poorly informative about the subtle processes (conformational and ionization equilibria, the formation of aggregates or of transient complexes with other molecules, e.g. water, etc.) that are governed by the ‘internal’ properties of molecules. Yet biological effects are regulated by precisely this type of process. The actions of ACh are, for example, ascribable to different rotamers of the molecule, interacting in a selective manner with the active sites of the different ACh receptor subtypes. And structural formulas for substances like nicotine and muscarine are nothing but very approximate, static models of such conformational states of ACh.

Rigidity is a disadvantage also intrinsic to experimentally derived (i.e. by X-ray and, more recently, NMR spectroscopy) or computationally derived 3D models of biopolymers, proteins in particular.[28] As for the first kind of model, the structure of an enzyme as determined by, for instance, X-ray diffraction may happen to be devoid of biological relevance, because the crystallographer takes, so to speak, only a snapshot of the conformation that the enzyme assumes when packed into a crystal, a conformation which can differ from the one predominating in solution, e.g. in the aqueous environment of the cell. With regard to the second kind of model, i.e. computational models,[29] it is now acknowledged that they are effective tools for medicinal chemistry research. Some have turned out to be in good agreement with experimental models, which suggests that they are realistic representations of protein tertiary structure. One major drawback is, however, again their static quality, as exemplified by Hoflack et al.’s remarks on models of G protein-coupled receptors:

The 3D models […] clearly suffer from a large number of hypotheses and approximations. In order to perform a ‘realistic’ simulation of a G protein-coupled receptor transmitting an extracellular message, a molecular dynamics study is necessary involving the transmembrane region of the receptor, its intra- and extracellular loops, a large number of water molecules surrounding the cell membranes, the lipid bilayer, the G protein, GDP and GTP, and this for a simulation period of at least 1 ms. It is clear that with the present technology, such a simulation is impossible. [Hoflack et al. 1993, p. 94]

Here, Hoflack assumes that the quality of the models depends on computing power, which is, of course, in part true. Conceivably, by working with better computers, theoretical chemists will in the future be able to construct ever more exact and faithful models. Up to now, however, the traditional conception of molecular structure has been taken as a necessary starting point for the modeling of biopolymers. Thus, unless ab initio methods for calculating the spatial structure of proteins are developed, more fundamental limits of type A will invariably come into play. Note that computing a protein’s conformation on the basis of its amino acid sequence is a hitherto essentially unresolved problem. Even though very encouraging results have been reported in a recent paper by Simmerling et al. (2002) on the possibility of obtaining 3D structures of small proteins by means of molecular dynamics simulations, with excellent convergence to experimentally derived models, the extension to bigger systems is a challenge whose outcome may be uncertain. "There is no reason in principle why we could not understand the algorithm by which a sequence determines structure, and, therefore, be able to predict structure from sequence; it’s just a very difficult problem", said an expert in the field (Tramontano 1993, p. 40, my translation). One may observe that Tramontano’s "sequence [of a protein]" is tantamount to "structural formula of a protein molecule", and that the difficulty she points out comes, yet again, from the gap between a deeply complex phenomenon, protein folding, which is thought to occur via sequences of subtly cooperative events, and the semantically poor, schematic description of molecular structure that is typical of classical chemical theory.
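A rough count (mine, with an assumed system size and the integration time step typical of molecular dynamics) indicates why the ‘realistic’ simulation Hoflack et al. call for is out of reach: the number of integration steps alone is enormous.

```python
# Back-of-envelope estimate (not from the sources cited above) of the cost of the
# simulation described by Hoflack et al.: counting integration steps only.
timestep_fs = 2.0                 # a typical MD integration step, about 2 femtoseconds
target_fs = 1.0e12                # the 1 ms simulation period quoted above, in femtoseconds

n_steps = target_fs / timestep_fs
n_atoms = 3e5                     # hypothetical receptor + membrane + solvent system

print(f"integration steps needed: {n_steps:.1e}")                # ~5e11 steps
print(f"single-atom position updates: {n_steps * n_atoms:.1e}")  # ~1.5e17 updates
```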
 
 

Conclusions

Reconsidering from a chemical perspective a statement by Hartle (1996) cited above (Note 6) provides a brief recapitulation of the arguments developed thus far:

[Molecular models] must have some degree of simplicity to be discoverable, comprehensible, and effectively applicable by human beings […] If the complexity of [chemical events] is large, then this necessary simplicity of the [models] implies that this kind of limit to [chemical] knowledge is inevitable.

Throughout the paper, we have shown in several ways the complexity of chemical phenomena and detailed the reasons why representations of molecular structure are simple. We may add, for the sake of completeness, that the need for a more profound characterization of molecular inner properties and dynamics is presently met in part (often with excellent results) by the methods of computational chemistry. A discussion of such methods, which are now being extensively exploited in many fields of chemical research, including synthetic chemistry, and in molecular biology, is beyond the scope of this paper. We have already noted, however, that since molecular modeling techniques are not based on first principles (owing to computational, i.e. type B, limits) but instead presuppose molecular structure as defined by means of traditional chemical experimentation, they will necessarily show traces of the approximation implicit in that mode of theoretical representation, i.e. type A limitations.
 
 

Acknowledgement

I am grateful to Dr. Joachim Schummer for his suggestions and comments.
 
 

Notes

[1]  This does not mean that modern scientific consciousness leads one to consider the physical universe as devoid of sense. Following the upholders of the (strong) anthropic principle, I hold that the only sensible conclusion one may draw when looking at recent experiments in astrophysical research is that the structure of the universe has been planned, so to speak, in order to permit the appearance of life in it [Cf. Bersanelli 1997, p. 51]. For a discussion of the anthropic principle, see Carreira 2002 and references therein.

[2]  Quoted in Bersanelli (1998, p. 60; my translation).

[3]  Cf. Barrow 1999.

[4]  A brief discussion on the epistemological significance of Gödel’s theorem can be found in Arecchi & Arecchi (1990, pp. 155-6); the problem of the interpretation of Heisenberg’s principle is introductorily treated in Strumia (1992, pp. 185-200); on chaotic systems, see Gleick 1987; for a case illustrating the difficulties in identifying genotype-phenotype relationships, see Cubells 2000. See Casti & Karlqvist 1996 for studies on limits internal to several distinct areas of science. It should be added here that scientific knowledge is limited in a more fundamental sense. The scientific method ultimately consists in the quantification of material properties and the search for logical relations between the variables representing such quantities. There exist realities, e.g. moral, aesthetic, and religious concepts and events, to which such method is evidently not applicable. On this, see Husserl’s The Crisis of European Sciences and Transcendental Phenomenology, and some of Putnam’s works, such as Reason, Truth and History, Renewing Philosophy, and Words and Life. Also, Arecchi & Arecchi (1990, ch. 8), Strumia (1992, chs. 1-2), and Giussani (1997, ch. 2; pp. 40-4, 97-8, 132-4) are useful reading.

[5]  Hartle 1996, pp. 116-9. The quotations in points a), b), and c) of the text following immediately are taken from the abstract of Hartle’s paper (not included in Hartle 1996), available at http://xxx.lanl.gov/abs/gr-qc/9601046.

[6]  See also p. 117: "Scientific laws must have some degree of simplicity to be discoverable, comprehensible, and effectively applicable by human beings and other complex adaptive systems. If the complexity of the present universe is large, then this necessary simplicity of the laws implies that this kind of limit to scientific knowledge is inevitable." It may be added that a radically simplified representation of nature is congenital to the scientific method. It is precisely because of the intricacy of nature that Galileo determined to consider, on approaching material phenomena, only some affections (e.g. length, motion, weight, opacity) as susceptible to measurement and commutable into quantities to be related mathematically. According to the Italian scientist’s celebrated metaphor, reported in The Assayer, the universe is to be thought of as a ‘book’ capable of being read only by one who has learned "to understand the language and know the characters in which it is written. It is written in the mathematical language, and the characters in it are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand any of its words; without these one is uselessly wandering about in a dark maze". (My translation; for the original text, see Pazzaglia 1991, p. 701). Galileo illustrates the obstacles that nature, this tangle of sensible qualities, this "oscuro labirinto", sets in the way of man’s investigation in the ‘parable’ of the investigator of sounds (see Pazzaglia 1991, pp. 701-4 for text and comments).

[7]  See on this Mosini (1994, 1995), Scerri (2000), and references therein.

[8]  At least concerning quantum chemistry, I am rather skeptical, however, about the possibility of such constraints being eliminated in the future. In a recent article, Theodor Benfey takes a notorious assertion by Dirac ("the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known") as a starting point to demonstrate the impossibility of reducing chemistry to quantum mechanics: "Admittedly, Dirac added a sentence that is usually not quoted: ‘…the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble’. I would like to quantify the impossibility of Dirac’s claim […] I suggest that, in our mathematical computations, we are limited by the composition and the complexity of the material world we live in, by the total number of fundamental particles in the universe. That cosmic number, the ratio of the mass of the universe to the mass of a proton, has been calculated to be approximately 10 raised to the 78th power. Any computer manipulation requiring more than this number of locations for an adequate description can be declared intrinsically impossible. […] E. Bright Wilson […] some years ago informed me that as simple a structure as benzene would require more than this number for even a rough description of its electron density profile. We can argue about the exact magnitude of the cosmic constant, but this new postulate of impotence is clear. There is an intrinsic limit to what physics can do in predicting the phenomena of chemistry." (Benfey 2000, p. 198).

[9]  Allow me to report here an anecdote (told to me by a colleague, G. Gatti, some time ago) about an experienced quantum chemist, who at a certain point exclaimed: "La decalina è ‘na cattedrale!" ("Decalin is a cathedral!"). Now, if the electronic structure of decalin is a ‘cathedral’, what about a protein, a dendrimer, a big organometallic catalyst?

[10]  Elemental analysis, one of the most decisive factors for the emergence of modern chemical theory, too, implies transformation: to determine the elemental composition of a material, I have to disrupt it chemically.

[11]  Chemists, too, had to escape (to put it in Galilean words) the ‘maze’ of nature by reducing the flux of natural events, in this case the transformations of matter, to a logical, if not mathematical, discourse; the father of modern science would have been pleased, I suppose, to read current chemical literature, which includes so many ‘triangles, circles, and other geometrical figures’…

[12]  This is often done by searching a chemical database, or simply one’s memory, for a reaction by which compounds structurally similar to one’s target have been previously synthesized, and by seeing whether the requisite starting materials are easily accessible chemicals. In other cases, more or less extended and inventive retrosynthetic analyses have to be carried out.

[13]  The classes X/Y can also be more restrictively determined, for example: 5-methyl-4-aryl-1H-pyrrol-2-carboxaldehydes/5-methyl-4-aryl-1H-pyrrol-2-carboxylic acids.

[14]  A nice example is Lee et al. 1991, p. 7009: the formation of the undesired silyl ketal 11, a side-product of the singlet oxygen oxidation of 2,4-disubstituted furans (cpds 9) to 4-substituted-2(5H)-furanones (cpds 3), which takes place when the C-2 trialkylsilyl groups are more voluminous than the trimethylsilyl group, could, on the basis of mechanistic considerations, be avoided by the simple addition of a catalytic amount of water to the reaction mixture.

[15]  Of course, a structural formula is not per se completely meaningless from a chemical viewpoint, as structural formula syntax, too, was inferred from experimental facts. So it could be speculated that a very rudimentary chemistry could still be done based on ‘naked’ structural formulas.

[16]  A myriad of cases could be picked out from the chemical literature. A good example is reported in Huffman et al. 2000: unlike structurally related resorcinol dimethyl ethers, cpd 10 proved to be inert to lithiation "under a variety of conditions" (pp. 440-1). The influence exerted upon the reactivity of a certain functional group by substituents not belonging to the site of reaction has been known since the second half of the 19th century. Indeed, Markovnikov clearly expressed this concept (Cf. Solovev 1971, pp. 228-9).

[17]  A significant example is reported in Sternbach 1979. The uncommon transformation of quinazoline 3-oxide 11 into benzodiazepine 4-oxide 13, the prototype of one of the most successful classes of drugs discovered in the 20th century, namely benzodiazepines, resulted from the use of methylamine instead of a secondary amine! (See Scheme I)

[18]  It is worth mentioning here that the emergence of combinatorial chemistry is only apparent evidence of a permutational ability of chemists that would allow them to generate potentially unlimited molecular variations. The construction of a combinatorial library depends on the availability of a highly efficient solid-phase protocol (i.e. one providing the desired product in high yield and without concomitant formation of side products) for each of the steps leading to the final structure. Such conditions, however, are not easy to meet because, compared with traditional organic chemistry, solid-phase synthesis often entails special operational problems, due to resin stability, reactant solubility, etc. That is why ‘combichem’ has up to now been chiefly applied to the preparation of collections of biooligomers (polypeptides, oligonucleotides, etc.).

[19]  Here it may also be noted that there exists a practical factor narrowing our knowledge (the adjective ‘practical’ does not necessarily imply that it may be overcome in the future), namely the difficulty or impossibility of isolating from complex mixtures substances present in very low concentrations. As I suggested in a former paper (Tontini 1999, pp. 63-66), small amounts of by-products in reaction media indicate that an apparently rather straightforward event, such as an organic reaction under controlled conditions, is actually a complex process that a standard chemical investigation cannot grasp thoroughly, whose ‘recesses’ it cannot, so to speak, penetrate. If a number of side products are formed in, say, 1-2% yield, one cannot exclude the presence of smaller amounts of other by-products. Of course, there are reasons of a practical nature that impede or discourage the isolation from reaction crudes of compounds formed in vanishingly small amounts. Were they isolated and analyzed, though, such products would add to chemical knowledge, not only through novel structures or properties, but possibly even more so because their presence could reveal unprecedented reaction pathways and help support a proposed reaction mechanism (for an example, see Trost 2002, p. 697).

[20]  Here are the results of a few sample searches of the Beilstein on-line database: 17 items found for the reaction "1,3-diarylbenzene -> 1,3-diaryl-4-G-benzene" (G = any arbitrary structure), and no items for "1,3-diarylbenzene -> 1,3-diaryl-2-G-benzene"; 4 items (among them bromination and nitration) for "1,3-di-t-butylbenzene -> 1,3-di-t-butyl-4-G-benzene" and no items for "1,3-di-t-butylbenzene -> 1,3-di-t-butyl-2-G-benzene"; 11 items for "1,3-diisopropylbenzene -> 1,3-diisopropyl-4-G-benzene" and 2 items (G = NO2, Br) for "1,3-diisopropylbenzene -> 1,3-diisopropyl-2-G-benzene". Note that, consistent with the smaller volume of an isopropyl group compared to that of a tert-butyl one, the ratio between the number of items for 4-substituted and for 2-substituted regioisomer formation is 11 to 2 in the case of the last reaction, while formation of 2-substituted regioisomers from 1,3-di-t-butylbenzene has never been reported.
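Beilstein is a proprietary database with its own query tools, so the searches above cannot be reproduced here; purely as an illustration of the kind of regiochemical classification involved, the following sketch uses the open-source RDKit toolkit and invented product structures to distinguish 4- from 2-substituted derivatives of 1,3-di-t-butylbenzene.

```python
# Hedged illustration: the product SMILES below are invented examples, not
# Beilstein hits; the SMARTS patterns simply ask whether the new group G sits
# between the two tert-butyl groups (2-position) or next to only one of them
# (4-position).
from rdkit import Chem

pattern_2_subst = Chem.MolFromSmarts("c(C(C)(C)C)c([!#1])c(C(C)(C)C)")
pattern_4_subst = Chem.MolFromSmarts("c(C(C)(C)C)cc(C(C)(C)C)c([!#1])")

example_products = {
    "invented 2-nitro derivative": "CC(C)(C)c1cccc(C(C)(C)C)c1[N+](=O)[O-]",
    "invented 4-bromo derivative": "CC(C)(C)c1ccc(Br)c(C(C)(C)C)c1",
}

for name, smiles in example_products.items():
    mol = Chem.MolFromSmiles(smiles)
    if mol.HasSubstructMatch(pattern_2_subst):
        print(name, "-> G at the hindered 2-position")
    elif mol.HasSubstructMatch(pattern_4_subst):
        print(name, "-> G at the less hindered 4-position")
```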

[21]  The chemical literature provides countless examples. I take one of them from Huffman et al. 2000: in the synthesis of 8 from 3,5-dimethoxyaniline and 2-chlorocyclohexanone, milder conditions, which avoided degradation of the reactants, were adopted "assuming that the extremely electron rich aminocyclohexanone intermediate" would favor cyclization (p. 440). Note that inferences like this are as a rule made simply through visual inspection of structural formulas, and not by means of material or computational models of molecular structure.

[22]  As one of the referees suggested, there may in fact be discrepancies among experimentally or computationally derived three-dimensional models of chemical species. In his/her own words: "The current method of representation of organic molecules (assuming sp3 hybridization and tetrahedrally directed bonds) is a considerable oversimplification (or just a fiction). R. A. Y. Jones (Physical and Mechanistic Organic Chemistry 2nd Edition, [Cambridge University Press], 1984, p. 118) and N. S. Isaacs (Physical Organic Chemistry 2nd Edition, Longman, 1995, pp. 28-29) give the modern picture for methane based on photo-electron spectroscopy. The four hydrogen atoms of methane can be regarded as being located at alternate vertexes of a cube. While this model is comprehensible for a simple molecule like methane, any attempt to apply this representation to long chain and branched molecules leads to massive confusion." (Emphasis in the original) Frequently, however, 3D molecular models derived by different methods turn out to be in substantial agreement with each other. That is why I am willing to concede that spatial representations of molecules are "considerab[ly] oversimplifi[ed]", but reluctant to think of them as "fiction".
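To make the cube description quoted above explicit, here is a short, standard coordinate-geometry calculation, added only for illustration and not drawn from the cited sources: with the carbon atom at the centre of a cube, hydrogens placed at alternate vertices subtend the familiar tetrahedral angle of about 109.5°.

```latex
% Illustrative calculation only (standard geometry, not from the cited sources):
% carbon at the origin, hydrogens at four alternate vertices of a cube of edge 2.
\[
\mathbf{r}_1 = (1,1,1), \qquad
\mathbf{r}_2 = (1,-1,-1), \qquad
\mathbf{r}_3 = (-1,1,-1), \qquad
\mathbf{r}_4 = (-1,-1,1)
\]
\[
\cos\theta_{\mathrm{HCH}}
  = \frac{\mathbf{r}_1 \cdot \mathbf{r}_2}{|\mathbf{r}_1|\,|\mathbf{r}_2|}
  = \frac{1 - 1 - 1}{\sqrt{3}\,\sqrt{3}}
  = -\frac{1}{3}
  \quad\Longrightarrow\quad
  \theta_{\mathrm{HCH}} \approx 109.47^{\circ}
\]
```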

[23]  Other subtle phenomena may be discovered in the future by spectroscopic analysis, but an indeterminate number of them may remain beyond our knowledge. Our human nature, and the properties of the world around us, may preclude all chances of constructing instruments capable of revealing such phenomena. This, too, might be thought of as a (practical) limit to chemical knowledge.

[24]  Interestingly, in Schummer’s opinion, the "anti-scientific movement today is […] the direct result of frustrated hopes that were seeded in a quasi-religious manner by the former priests of science. Not surprisingly, chemistry is the main target of attacks because of its technological promises based on apparently perfect knowledge […]. A similar kind of naivety is [...] also deeply rooted in the chemical mind when taking, in the sense of philosophically essentialism, molecular structure as the essence of a chemical substance. That view pretends or makes other believe that we would know everything about a chemical substance, if we know only the geometrical data of the molecular structure exactly. If it turns out then that the substance, say a simple fluorochlorohydrocarbon, has an ozone depletion potential, people are inclined to think that chemists did only hastily, hence irresponsibly, determine the geometry. What can chemists reply? I am afraid they must admit that they know only very little about a chemical substance via its geometrical molecular structure, in particular outside the lab." (2000, personal communication).

[25]  For an example regarding synthetic organic chemistry see Stork et al. 1977: the authors admit that the different behavior of two strictly similar α,β-epoxy ketones (cpds 4 and 18) under Wharton reaction conditions "may have to be ascribed to conformational problems in the transition state which are too subtle to interpret at this stage." (p. 7068).

[26]  Formation of covalent bonds (e.g. phosphorylation of a key protein residue) is also a common event that initiates cellular processes.

[27]  This is a classical example of the so-called "rigid analogue approach", a technique largely employed in medicinal chemistry research (Cf. Hart et al. 1996).

[28]  Constructing good 3D models of proteins has become especially urgent after the mapping of the human genome. At present, the amino acid sequences of a huge number of proteins encoded in human DNA are known, many of which may hopefully represent novel targets for pharmaceutical research. To test this possibility, however, it will first of all be necessary to establish the biochemical and physiological functions of such proteins. Deriving their tertiary structure could subsequently facilitate one of the fundamental steps in the laborious process leading to a new medicine, namely the discovery of a molecule able to bind to the target protein with good affinity.

[29]  Plausible 3D models of proteins of biological interest can currently be constructed with a sophisticated technique, called homology modeling. For a review, see Hibert 1996.
 
 

References

Arecchi, F.T.; Arecchi, I.: 1990, I simboli e la realtà – Temi e metodi della scienza, Jaka Book, Milano.

Barrow, J.D.: 1999, Impossibility: The Limits of Science and the Science of Limits, Oxford UP, Oxford.

Benfey, T.: 2000, ‘Reflections on the Philosophy of Chemistry and a Rallying Call for Our Discipline’, Foundations of Chemistry, 2, 195-205.

Bersanelli, M.: 1997, ‘Un ambiente ospitale’, Tracce-Litterae Communionis, 3, 50-1.

Bersanelli, M.: 1998, ‘Terra incognita – L’insondabile Mistero è una realtà’, Tracce-Litterae Communionis, 2, 60-2.

Carreira, E.M.: 2002, ‘Il principio antropico’, La Civiltà Cattolica, I, 435-46.

Casti, J.L.; Karlqvist, A.: 1996, Boundaries and Barriers: On the Limits to Scientific Knowledge, Addison-Wesley, Reading.

Cubells, J.F.: 2000, ‘Principles of Pharmacogenetic Study Design: Lessons from Cocaine-Induced Paranoia’, Pharmaceutical News, 7, 39-46.

Del Re, G.: 1998, ‘Ontological Status of Molecular Structure’, HYLE – International Journal for Philosophy of Chemistry, 4, 81-103.

Giussani, L.: 1997, The Religious Sense, McGill-Queen’s UP, Montreal & Kingston-London-Buffalo.

Gleick, J.: 1987, Chaos, Viking Penguin, New York.

Hart, P.A.; Rich, D.H.: 1996, ‘Stereochemical Aspects of Drug Action I: Conformational Restriction, Steric Hindrance and Hydrophobic Collapse’, in: C.G. Wermuth (ed.), The Practice of Medicinal Chemistry, Academic Press, London, pp. 393-412.

Hartle, J.B.: 1996, ‘Scientific Knowledge from the Perspective of Quantum Cosmology’, in: Casti, J.L.; Karlqvist, A. (eds.), Boundaries and Barriers: On the Limits to Scientific Knowledge, Addison-Wesley, Reading, pp. 116-147.

Hibert, M.F.: 1996, ‘Protein Homology Modelling and Drug Discovery’, in: C.G. Wermuth (ed.), The Practice of Medicinal Chemistry, Academic Press, London, pp. 523-46.

Hoffmann, R.: 1995, The Same and Not the Same, Columbia UP, New York.

Hoflack, J.; Hibert, M.; Trumpp-Kallmeyer, S.: 1993, ‘Molecular Modeling of Membrane Receptors: from Experiment to Experiment’, in: Biologia Molecolare dei Recettori e Drug Design – X Seminario Nazionale per Dottorandi in Scienze Farmaceutiche, centrostampa dell’Università di Urbino, pp. 91-127.

Huffman, J.W.; Lu, J.; Dai, D.; Kitaygorodskiy, A.; Wiley, J.L.; Martin, B.R.: 2000, ‘Synthesis and Pharmacology of a Hybrid Cannabinoid’, Bioorganic & Medicinal Chemistry, 8, 439-47.

Laszlo, P.: 1998, ‘Chemical Analysis as Dematerialization’, HYLE – International Journal for Philosophy of Chemistry, 4, 29-38.

Lee, G.C.M.; Syage, E.T.; Harcourt, D.A.; Holmes, J.M.; Garst, M.E.: 1991, ‘Singlet Oxygen Oxidation of Substituted Furans to 5-Hydroxy-2(5H)-furanone’, Journal of Organic Chemistry, 56, 7007-14.

Mosini, V.: 1994, ‘Some considerations on the reducibility of chemistry to physics’, Epistemologia, XVII, 205-24.

Mosini, V.: 1995, ‘Chemistry and the Primacy of Physics’, in: Janich, P.; Psarros, N. (eds.), Die Sprache der Chemie, Königshausen & Neumann, Würzburg, pp. 177-85.

Pazzaglia, M.: 1991, Letteratura Italiana – Testi e critica con lineamenti di storia letteraria, vol. 2, Zanichelli, Bologna.

Scerri, E.R.: 2000, ‘The Failure of Reduction and How to Resist Disunity of the Sciences in the Context of Chemical Education’, Science & Education, 9, 405-25.

Schummer, J.: 1997, ‘Challenging Standard Distinctions between Science and Technology: The Case of Preparative Chemistry’, HYLE – International Journal for Philosophy of Chemistry, 3, 81-94.

Schummer, J.: 1998a, ‘The Chemical Core of Chemistry I: A Conceptual Approach’, HYLE – International Journal for Philosophy of Chemistry, 4, 129-62.

Schummer, J.: 1998b, ‘Epistemology of Material Properties’, paper given at the Twentieth World Congress of Philosophy, Boston, MA, USA, 10-15 August 1998 [http://www.bu.edu/wcp/Papers/TKno/TKnoSchu.htm].

Simmerling, C.; Strockbine, B.; Roiter, A.E.: 2002, ‘All-Atom Structure Prediction and Folding Simulations of a Stable Protein’, Journal of the American Chemical Society, 124, 11258-9.

Solov’ev, J.I.: 1971, Evoljucija Osnovnyh Teoreticeskih Problem Himii, Izdatel’stvo Nauka, Moscow (page quotations according to the Italian edition: L’evoluzione del pensiero chimico dal ‘600 ai nostri giorni, trans. by A. Quilico, Mondadori, Milano, 1976).

Sternbach, L.H.: 1979, ‘The Benzodiazepine Story’, Journal of Medicinal Chemistry, 22, 1-7.

Stork, G.; Williard, P.G.: 1977, ‘Five- and Six-Membered-Ring Formation from Olefinic α,β-Epoxy Ketones and Hydrazine’, Journal of the American Chemical Society, 99, 7067-8.

Strumia, A.: 1992, Introduzione alla filosofia delle scienze, Ed. Studio Domenicano, Bologna.

Tontini, A.: 1999, ‘Developmental Aspects of Contemporary Chemistry – Some Philosophical Reflections’, HYLE – International Journal for Philosophy of Chemistry, 5, 57-76.

Tramontano, A.: 1993, ‘Dai Geni alle Proteine: Disegno di Strutture Proteiche’, in: Biologia Molecolare dei Recettori e Drug Design – X Seminario Nazionale per Dottorandi in Scienze Farmaceutiche, centrostampa dell’Università di Urbino, pp. 39-88.

Trost, B.M.: 2002, ‘On Inventing Reactions for Atom Economy’, Accounts of Chemical Research, 35, 695-705.

Villani, G.: 1994, ‘Specificità della Chimica’, in: V. Mosini (ed.), Philosophers in the laboratory. Proceedings of the meeting Riflessioni Epistemologiche e Metodologiche sulla Chimica, EUROMA, Roma, pp. 163-180.

Vinti, C.: 1994, ‘Bachelard: ragione e realtà nella chimica contemporanea’, in: V. Mosini (ed.), Philosophers in the laboratory. Proceedings of the meeting Riflessioni Epistemologiche e Metodologiche sulla Chimica, EUROMA, Roma, pp. 163-80.


Andrea Tontini:
Istituto di Chimica Farmaceutica e Tossicologica, Università degli Studi di Urbino "Carlo Bo", Piazza del Rinascimento 6, I-61029 Urbino (PU), Italy; utinao@katamail.com

Copyright © 2004 by HYLE and Andrea Tontini