HYLE – An International Journal for the Philosophy of Chemistry, Vol. 5 (1999) No.2, pp. 79-115
Copyright © 1999 by HYLE and Jacopo Tomasi
 
 

Towards ‘chemical congruence’ of the models in theoretical chemistry

 

Jacopo Tomasi*

Abstract: A series of ‘growth crises’ in the methodological framework of chemistry has led to serious discrepancies between the operational approach used in experimental practice and the methods and models used in theory. Theory, based on the quantum version of microphysics, has met difficulties in giving its concepts an operational status congruent with that of experimental chemistry. The process of redefinition is examined here, on the basis of an analysis of theoretical chemical models and of criteria for judging their congruence with this process of methodological harmonization.

Keywords: models in theoretical chemistry, theoretical analysis, methodological criteria for models, chemical concepts.


 

1. Why does the chemical way of thinking contribute so little to the analysis of science-related problems?

Chemistry has a large impact on the economic aspects of our society, plays an important role in our everyday life, from health to pollution, and pervades our scientific activities in all fields of the study of matter, from cosmology to the investigation of living bodies.

In spite of that, the chemical community plays a modest role in the development and critical examination of themes of general interest. Chemical associations are mostly concerned with supporting the professional status of their members or defending the interests of producers of chemical goods, while little space is reserved for groups engaged in issues of more general interest. The lack of activity of professional associations is not compensated by significant interventions of individual chemists – with few exceptions, just sufficient to show (if there were a need to demonstrate it) that the experience deriving from chemistry could be of great interest to general discussions.

In its multifarious occurrences within the fields of human interest, chemistry maintains a well-defined identity: there is, in general, no doubt in deciding whether an activity, a procedure, or an approach belongs to the chemical realm or contains some chemical components. The concept of chemistry, in other words, is a ‘robust’ concept, surviving the impact of external factors that call for various applications of chemical concepts and expertise.

An explanation of the well-defined identity of chemistry could be advanced by recalling the long tradition of chemistry as one of the few basic units into which science has been divided. Actually, this is not yet an explanation, because the partition of the body of scientific inquiry has changed over the centuries: some former basic units disappeared; others have been strongly modified in both name and definition. However, the fact that chemistry has survived for at least three centuries as one of the main fields of science is a testimony to its robustness.

I shall not attempt to explain this robustness. I confine myself to considering it a piece of empirical evidence, to be combined with another piece of empirical evidence: the important role chemistry plays in the realm of science and other human activities. Both are to be contrasted with the decidedly small weight of the ‘chemical approach’ in the critical discussion of basic problems of knowledge, science, and the other human intellectual and practical interests.

Of course, the reasons for this small impact of a ‘chemical way of thinking’ are quite complex. Many analyses could be performed that follow different approaches, support different aspects, and underline different interpretations. A synthesis could be achieved only by comparing the evidence presented by all such studies. However, it is not my ambition to attempt a synthesis here, nor will I present a broad view of this subject covering all the manifestations of human activity in which the weakness of chemical thinking is manifest. Instead, I will confine myself to a single aspect, in which I am professionally more involved and interested, and which I think has had some impact on the situation we are lamenting: the very core of the discipline, namely its methodological realization as a branch of science.

Every branch of science is characterized by a specific methodological framework that is often quite complex and composed of elements of different nature, such as rules, paradigms, and concepts, which I do not dare define more precisely. Suffice it to remark that this methodological framework must be accompanied by a set of critical paradigms defining what is permitted (or accepted) by the methodological setting and what is not. Some disciplines have a more rigid, others a less rigid, critical framework.

In the last century, a relatively loose critical framework has, in my opinion, characterized chemistry. This situation is in sharp contrast with that experienced by chemists of previous generations. It seems to me convenient to follow an historical perspective, which I will sketch below, denoting it a ‘partisan view’ to avoid continuous warnings that my exposition is extremely schematic and incomplete. The reality of complex processes such as the development of a scientific discipline is by far more involved and more ambiguous than a short survey can take into account. In addition, I will stress some points and neglect others of comparable importance, because I will try to reach some conclusions for the future development of the specific branch of chemistry in which I am working: theoretical chemistry.
 
 

2. At the roots of the problem: a partisan view of the historical development of chemistry

Starting in the last part of the eighteenth century and continuing throughout the nineteenth, chemistry was, in the common opinion of educated people, ‘the Science’ par excellence. It was a paradigm for the other branches of science, a guide, the main guide indeed, to understanding the essence of the material world, as well as the main hope of improving the quality of life in industrialized countries.

The language of political speeches provides a good indication of this special status of chemistry, in particular the speeches and pamphlets of the French revolution and the ensuing, more abundant production of political texts. These texts reveal that the borrowing of words and concepts from chemistry for the progressively changing needs of the political debate largely exceeded the linguistic import from any other scientific discipline. Only around the beginning of the twentieth century did political language shift from chemistry to other disciplines in its search for new metaphors or words [1].

The change of political jargon is surely not related to a lessening of the impact of chemistry on economic activities. In the very period in which the flow of chemical terms into political language was shrinking, the economy blossomed into the second industrial revolution, which was to a large extent based on chemistry. A reason could instead be identified in the development of other disciplines or, better, in the evolution of the mutual conceptual relationships among the various branches of science. That seems to me the essential point.

Before that, chemistry had been able to develop an admirable body of methodological concepts, well grounded in chemical practice and almost self-sufficient, on which the development of the discipline was based. This methodological framework gave no space to ‘metaphysical’ concepts. Actually, it was instrumental in ridding the discipline of the burden of concepts vaguely defined in a more distant past. Moreover, the methodological approach was very effective in promoting scientific progress.

The standard formulation of chemistry was ‘anti-metaphysical’ for about a century. Almost everything was reduced to macroscopic quantities – weights, volumes, temperatures, etc. – measurable with standard laboratory equipment. Other aspects of the discipline were left in the background: some were paid lip service (as the Democritean concept of the atom), others were reluctantly accepted as accessory concepts useful to expedite the formulation of research projects or of the ensuing scientific reports. The history of the long debates about benzene and of the fierce resistance to Kekulé’s ideas is well known. Many other examples are easy to find, in particular in the history of studies on isomerism, which were naturally inclined to use the concept of ‘structure’, viewed as the spatial connections among atoms within a molecule, while in the standard formulation there was no space for the concept of molecular structure.
 

2.1 The impact of microphysical concepts on chemistry

A decisive attack on the standard formulation of chemistry came from physics. There is no need to review the successful progress in our knowledge of the material world at the submicroscopic level at the turn of the twentieth century. Concepts of microphysics rapidly passed from the realm of conceptual models to that of real things: electrons, atoms, nuclei, and molecules acquired the status of elements of the real world, directly accessible to measurements and to further inquiries about their properties and internal constitution.

The enormous enrichment of human knowledge of the material world was accepted by the standard formulation of physics with little embarrassment. (Actually, the crisis of classical physics had its roots in these discoveries, but the standard methodological framework of physics was not destroyed by the addition of new elements at the submicroscopic level.)

In contrast, the embarrassment was evident in the standard formulation of chemistry. There is no need, here again, to review the debates and sharp discussions among chemists at that time: it is sufficient to recall the name of Wilhelm Ostwald and the sharp defense of the traditional methodologies of chemistry that he personified. We may also remark that in the same years the standard formulation of chemistry received a stricter epistemological basis under the influence of Mach.

There are good reasons why Ostwald’s battle was rapidly lost. To have real molecules composed of real atoms, in turn composed of real nuclei and electrons, was appealing to chemists working on the synthesis of new substances (there was great progress in the field; see, for example, the dyes). They could exploit the new concepts to rationalize experimental evidence about chemical reactions and equilibria (new techniques were available), so there was little hesitation in abandoning the formally rigorous standard chemical approach. In fact, a strict application of the traditional approach impeded the use of the new conceptual instruments, which lacked a well-defined logical status therein.

In my opinion, that was the first important revolution (in the Kuhnian sense) occurring in chemistry since its scientific foundation about 150 years before. It corresponds to the beginning of physical chemistry – the two facts are strictly related – and it also corresponds to a rapid evolution of the abilities and efficiency in the synthesis of new compounds. The use of new conceptual (and experimental) tools, based on the recent advances in microphysics and statistical physics, enabled a further expansion of chemistry, both in the large-scale production of basic goods and in the formulation of new classes of compounds having a direct impact on everyday life.

Undoubtedly, it was a very successful revolution, as regards the relative importance of chemistry in the economic structure of our society. However, it was also a revolution that deprived chemistry of a specific conceptual and theoretically critical ‘hard core’. Every scientific discipline requires a conceptual hard core that defines the specific way the discipline looks at problems and suggests solutions. Not only is the formulation of a conceptual hard core essential in the initial stages of the birth and growth of a new discipline (the history of science gives us clear examples of it), it also continues to be extremely important later when the rapid evolution requires a continuous updating and revision of its conceptual identity. Otherwise, there is stagnation, rapid decrease of status, and eventually conceptual death.

Let us examine the critical hard core of chemistry after the ‘revolution’. It is of hybrid nature. The older methodological procedures have been preserved (they are still in use in the current practice) and combined with a new way of considering the material composition of substances and their transformations. The ‘new way’ derives from physics and continues to share with physics the language, the methodological approach, and the general theoretical framework within which further advances and new analyses are to be inserted. This synthesis of two different scientific approaches, quite efficient in practice, lacks the incisiveness of more focussed methodological formulations. As a result, the specific and peculiar way of expressing the chemical point of view on general problems has been lost.

Physics maintained an incisive profile, in spite of the fact that it experienced a crisis at the beginning of the century, originated by the same developments in microphysics that led to the revolution in chemistry. Quantum theory took some time to exert its profound influence, but immediately after the first quarter of the century, a second formulation of quantum theory caused another Kuhnian revolution, this time in physics.

During the first quarter of the century, quantum theories had little influence on chemistry, which nevertheless developed a new branch, theoretical chemistry, as a descendant of the recently born physical chemistry. Both were mostly based on a microphysical approach, with emphasis on the ‘reality’ of the concepts and models used in their investigations. The leading role was undoubtedly played by physical chemistry, with theoretical chemistry in a secondary position.
 

2.2 Quantum chemistry: a first revolution in theoretical chemistry

During the first decades of the century, the subjects and the methods of inquiry in theoretical chemistry were quite different from those used in previous theoretical analyses in chemistry. Even if rich in appreciable results, theoretical chemistry suffered from the lack of theoretical incisiveness we have already lamented, which had been one of the strong points of the former theoretical considerations on chemistry. On the other hand, the logical structure of this first version of theoretical chemistry was well suited to immediately incorporate and exploit the second formulation of quantum theory. This process may be considered as a first revolution in theoretical chemistry, and for the following thirty years, the paradigms of theoretical research in chemistry remained unchanged.

The quantum vision of the chemical properties of molecules, on which more details will be given later, did not succeed in changing the main way chemistry defined and solved its problems. That was a very unfortunate situation. Theoretical chemistry proposed a new approach to the coordination of efforts in the development of the discipline, an approach that the majority of chemists regarded with skepticism. In fact, they considered the quantum theoretical approach to be too physical and too mathematical to have a substantial impact on the real problems of chemistry.

It is worthwhile to quote the opening sentence of a famous textbook on quantum chemistry written in those years (Eyring et al. 1944): "In so far as quantum mechanics is correct, chemical questions are problems in applied mathematics." Quantum mechanics was, and still is, correct, at least at the level of energies of chemical interest, but chemists fiercely rejected the prospect of being reduced to applied mathematicians. The radical position expressed in the Eyring et al. sentence was accepted in the chemical community as a slogan defining the activity of quantum chemists – an activity thus only loosely related to the immediate interests of chemistry.

Actually, this sentence does not faithfully reflect the activities of several theoreticians working in the years 1930-1960. Another famous sentence (Dirac 1929) better reflects a different approach. It deserves to be fully quoted:

The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation.

The first part of the quotation is often considered the source of the opening statement of Eyring et al. The second part, often omitted in quoting Dirac’s words, is of greater interest here. It points to a research program indicated by the keywords ‘explanation’ and ‘approximate practical methods’. That research program was actually followed only by a limited number of scientists. Reference to a single name, Linus Pauling, and his famous book The Nature of the Chemical Bond (Pauling 1939), is sufficient to indicate this approach.

Approximate practical methods were elaborated, and applied with remarkable ingenuity; explanations of a considerable number of chemical problems were formulated and accompanied by several ‘chemical concepts’ of general applicability. A new theoretical formulation of chemistry began that fully incorporated the microphysical description of matter and, by further conceptual development, became more acceptable to the majority of chemists.

The new approach was in fact eventually acknowledged by chemists, but at a slow pace, bit by bit, and it did not lead to a revolution. The reason is again due, in my opinion, to the coexistence of two positions within the conceptual framework of chemistry, one of which resisted any attempt at further enlarging the role of formal physical and mathematical tools.

I take a break here in my partisan exposition of the conceptual development of chemistry, because the slow progression of the approximate and explicative quantum theoretical approach was perturbed by what I consider a second revolution in theoretical chemistry.
 

3. The second revolution in theoretical chemistry

The theoretical approach to chemistry in the Pauling style continued after the Second World War at the slow pace I have already stressed. A sizeable part of the increasing number of young people working on theoretical chemistry was not quite satisfied with this methodological formulation. Chemistry is the science of subtle differences among similar material systems. There is no need to give an extensive list of examples: a minor change in the chemical groups of a molecule may lead to important changes in the properties (methanol is, in important respects, very different from ethanol); reactions are sensitive to minor changes in the surrounding medium. These are two examples of important chemical problems for which the approximate quantum methods were not able to give a definite explanation. At the same time, experimental physical chemistry was challenging theory to give an interpretation of the wealth of experimental data provided by the advent of new techniques.

These were reasons justifying the desire to return to a rigorous application of quantum mechanics, applied now to molecular systems of direct chemical interest instead of the former simple prototypes, such as the hydrogen molecule. However, it was very difficult to put the modified research program into action. The problems of applied mathematics were tremendous, and we may now, in retrospect, consider this program as having been without any reasonable hope of success.

Things changed suddenly, thanks to developments in a very distant branch of science: electronic engineering. The advent of electronic computers opened the possibility of applying (and of developing) the kind of applied mathematics necessary for chemistry. Theoretical chemists had the technical skill necessary to use the new facilities with competence and efficacy. Almost immediately, quantum chemists became the most important university users of the available electronic computers. The need for computer facilities grew so fast that the quantum chemistry community strongly and repeatedly asked that governmental computers (Army, etc.) be made available to academic scientists.

The technical work done in a limited number of years is impressive. A large part of numerical mathematics was reworked and adapted to computers. Quantum chemists developed the applied mathematics necessary to treat complex atomic systems, with considerable expansion and deepening of the underlying mathematical formulation of the quantum theory. In short, quantum chemists took seriously the opportunity given by computers. Too enthusiastically, perhaps.

Taking the 1959 Boulder Conference on Molecular Quantum Mechanics as a reference, we may fix a conventional date marking the passage of theoretical chemistry to a new stage – a passage corresponding to a revolution and to a crisis in the previous formulation of quantum chemistry. The closing speech at the Boulder conference, given by C.A. Coulson (1960), reflects the impending crisis and the preoccupation of the more experienced scientists. Coulson expressed the view that the new era of quantum chemistry, so bright with exciting promises, would lead to a splitting of the discipline into three separate domains, each having its own set of paradigms and little mutual interaction and cross-fertilization. It is interesting to consider some details of this significant speech.

The exponents of group I, in Coulson’s words, were committed to "in-depth computing" and "prepared to abandon all the chemical concepts and simple pictorial quality in their results", "in order to achieve complete accuracy". "The exponents of group II [again in Coulson’s words] argue that chemistry is an experimental subject, whose results are built into a pattern around quite elementary concepts. The role of quantum chemistry is to understand these concepts and show what are the essential features in chemical behavior." Coulson gave some examples of such simple chemical concepts: bonds, orbitals, hybridization, ionic character, dispersion forces; we may say that these were all the chemical concepts developed in the preceding 30 years. Group III people were concerned with the "spreading of quantum chemistry into biology": "group I exponents will throw up their hands in horror at such attempts", "even group II members will mistrust the complete neglect of many terms which are known to be large". Coulson added: "In this field the prizes are immense – no less than the understanding and control of life itself. The future here may be far off."

In spite of the relaxed style adopted by Coulson, the worries are evident: advances within the theoretical/computational framework, which bore the potential to unify such a large part of the studies on molecular systems, were on the verge of splitting the field into separate domains, thus losing that unifying potential. Coulson even suspected that the Boulder conference would be the last general conference on molecular quantum mechanics.

A few years ago, I summarized Coulson’s speech (Tomasi 1996a), closing the summary with an optimistic remark: the separation into three distinct disciplines did not in fact happen, and the links among the groups have been reinforced. While I substantially confirm this optimistic view, it is important to note that things have not been so simple, and they continue not to be.

The increased use of electronic computers after 1960 undoubtedly lowered the status of theoretical analyses of chemical phenomena done according to the old style. The progression in the size of the molecular systems subjected to ‘ab initio’ quantum mechanical scrutiny has been accompanied by the discovery (or the claim) that approximations introduced in earlier applications were incorrect.

An example can be drawn from Molecular Orbital theory. In the period 1930-1960, this theory represented the alternative to the Valence Bond theory (used by Pauling and others) to rationalize chemical properties. The MO version of greatest practical success, elaborated by Hückel in 1931, considers only π orbitals (or electrons) in conjugated molecules. This drastic simplification was qualitatively warranted by general claims on the nature of π electrons (more mobile than the electrons belonging to the underlying σ skeleton) and on their orbital energies, supposed to be at the top of the energy range of the occupied orbitals. Higher mobility and easier excitation of π electrons were invoked to justify the reduction of chemical problems to this subset of orbitals alone. However, the first ab initio calculations of realistic conjugated systems showed that both claims were incorrect. The σ electrons display the same mobility as the π electrons (or even higher), and there is a mixed ordering of σ and π orbital energies in the set of the highest occupied orbitals.
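To make the Hückel simplification concrete, here is a minimal sketch (an illustration added here, not part of the original argument) in Python. The π problem of benzene reduces to diagonalizing a 6×6 connectivity matrix; the conventional values α = 0 and β = -1 (reduced units) are assumptions of the sketch.

import numpy as np

# Hückel model of benzene: one 2p-pi orbital per ring carbon.
# Assumed conventions: Coulomb integral alpha = 0, resonance
# integral beta = -1, in reduced units.
n = 6
H = np.zeros((n, n))
for i in range(n):
    j = (i + 1) % n            # each carbon interacts only with its ring neighbors
    H[i, j] = H[j, i] = -1.0   # beta for bonded pairs

energies = np.linalg.eigvalsh(H)
print(energies)   # [-2., -1., -1., 1., 1., 2.] in units of |beta|

The whole chemistry of the model sits in the connectivity of the π centers; it is precisely this drastic reduction whose justification the first ab initio calculations undermined.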

Other examples may be drawn from the Valence Bond theory. First, it was immediately realized that Valence Bond theory was not well suited to accurate studies using electronic computers (only in recent years have we had a revival of ab initio VB methods able to treat, with the use of new concepts, molecules of medium size with due accuracy). Second, it turned out that the selection of a limited number of mesomeric structures, introduced on empirical grounds to explain chemical phenomena, did not in most cases correspond to the results of more refined calculations. Since quantum computational chemistry was not able to reproduce the more important results of the previous quantum chemical analyses, it cast doubt on the validity of many ‘chemical concepts’, such as the interplay of resonance and induction effects and the whole machinery of arrows that organic chemists were willing to accept to explain electron transfer in reaction mechanisms.
 

3.1 The evolution of theoretical chemistry after 1960

The years after 1960 were a period of enthusiasm and of confusion at the same time. Some ‘computationists’ followed the way predicted by Coulson, defining microcosms (or subdisciplines) with specific paradigms and fields of interest, having little to share with the general body of chemistry. Examples are easy to find; we confine ourselves to remarking that they were, and still are, often related to specific experimental fields which provide a wealth of data requiring interpretation and offering a challenge for numerical reproduction, such as atomic spectroscopy and two- and three-body atomic and molecular beam collisions.

Some members of the second group continued their work without paying attention to the evolution of ab initio quantum chemistry (thus benefiting, in some sense, from the progressive collapse of the structure of critical paradigms). Others took a more positive position, elaborating and exploiting a weak version of the quantum mechanical ab initio methods, the so-called semiempirical MO approach (despised by the majority of the ab-initionists).

The winning position was led by several of the most prominent theoreticians of that time, such as, for example, Robert Mulliken. They directed their efforts, and the efforts of the young enthusiasts working under their guidance, both to the development of new computational techniques and to the redefinition of the chemical concepts of the preceding period within the new framework. (There was also remarkable progress in defining new concepts.)

It took some time before the results of this activity had a remarkable impact on the chemical community. One reason is strictly related to the institutional organization of the research. The majority of groups working in this field tacitly adopted the pragmatic view of using what was available ‘at home’ on the computational front to develop new methods of interpretation. This pragmatism corresponds to the old ‘chemical style’ of working only on concrete things (in the present case, the numbers obtained by the mathematical procedures available in each group) and of avoiding generalizations not supported by direct evidence. This concreteness is one of the traditional strong points of chemistry, but here it was accompanied by a lack of public and collective discussion about the methodological guidelines to be adopted for this activity. Each group had to elaborate its own guidelines and methodological paradigms. That situation, which I personally experienced, made it difficult to derive general methods of interpretation and analysis from studies on specific problems.
 

3.2 Interpretation and computation

Progress in the development of interpretative methods for quantum chemistry was not guided by a few leading laboratories (as happened in physics), but was rather the result of collective work to which many groups contributed. That situation led to a replication of efforts and to a large number of dead-end initiatives; but it also greatly enlarged the basin from which new ideas about methods (and new concepts) have been derived since. During the last ten years, we have been harvesting the positive results of this way of proceeding, which has also greatly contributed to the present spreading of chemical computations into almost all fields of chemical research. In spite of this success, I think there is still a need for a collective elaboration of methodological guidelines for the continuation of this activity. In fact, we cannot confine ourselves to extending what we have done in the preceding 30 years. The evolution of the discipline presents new challenges that call for an appropriate development of the theory, not only within the limits of what we are now able to do but also for other, more complex problems. This call for collective work for the future is not in contrast with the pragmatic approach followed until now, on which I have commented positively: we now have a large background of theoretical work to start with, following again the sound chemical practice of being concrete and of avoiding uncontrollable lucubration.

After 1960, something similar was achieved in the companion field of quantum mechanical calculations. The rapidly accumulating empirical evidence on the quality of ab initio calculations prompted critical reviews setting forth guidelines for the field of molecular calculations. This has been of great benefit for the evolution of these "approximate practical methods" (to quote Dirac’s words again): we need to do the same for the ‘explanation’, the second key concept in the Dirac sentence.

To contribute to that program, I will set out the guidelines I have elaborated in my past activity, dating back to the early sixties, and followed over the years with some coherence.
 
 

4. A sketch of a formal definition of theoretical analysis in the field of chemistry

A first point I consider essential in initiating a specific research activity is to state explicitly what the scientist (or the research team) is doing: motivations, methods and expected results [2].

The adjective ‘explicit’ requires some comments. Scientific activity in public scientific institutions is addressed, prima facie, to the production of scientific publications. Research in industry or other private institutions sometimes faces limitations on publication in scientific journals, but in these cases too the scientific activity aims at written reports. In all cases, there are finally written documents to be read by other interested people. We shall briefly consider later the ideal structure of these documents. Here, I remark that such documents rarely give explicit and detailed methodological statements. With the adjective ‘explicit’ I mean that the researchers have to define for themselves, in a detailed and accurate way, what they are doing: this initial step is even more necessary when other people are involved in the research project, especially at universities, where it should be an important step of the training. It is convenient to start the effort of clarification with a basic element of each research project: the model.
 

4.1 A typology of models

Theoretical chemistry works, by definition, on models. Actually, models are used in all scientific disciplines, including the more practically oriented fields of chemistry. The classification of models in use in our group is rather general, but I have not extensively checked how it works in other scientific fields. I summarize here the essential points of the elaboration of this concept I made more than 30 years ago, without any help from methodological studies (in which I am not expert). Probably some points are naïve, and others may be better expressed and developed by specialists in this field. However, in recent times I have found a sort of confirmation by colleagues who expressed similar ideas (Trindle 1984, Maksic 1991) and in a book by Mario Bunge (Bunge 1985), the only one I have read on this specific subject. In this Section and the following, I give a synopsis of what I wrote in 1988 (Tomasi 1988), based on a 1983 conference: details and more examples may be found in that paper and in another reference (Tomasi 1996a).

A model, according to the meaning this word has in science, is by definition incomplete with respect to an empirical reference that we shall call the ‘referent’. In theoretical chemistry, the referent is often a complex system, but a wider definition of it may be accepted. The models can be classified as ‘material’ and ‘abstract’. Both material and abstract models can be in turn classified as ‘iconic’, ‘analogic’, and ‘symbolic’. The taxonomy of models we shall use in the present context is reduced to this double classification. In actual applications, there is often a hierarchy of models: the scientific research is often based on models of models, and additional models are used to elucidate some specific points.

Material models have different applications and are clustered differently in the various fields of human activity subjected to scientific scrutiny. Architecture, engineering, and physics are examples of scientific fields in which large use of material models was, and still is, made. The famous collection of plan-reliefs displayed in the Musée de l’Armée in Paris is actually a collection of iconic material models elaborated and used for an old version of the science of military engineering. Cars and airplanes placed in wind tunnels are material analogic models used to better define the aerodynamic profile of such artifacts. Distribution and flow problems, such as those of electricity in the distribution net or of water in rivers, channels, and pipes, have often been checked with the help of material models of analogic or symbolic nature. It would take too long to give examples covering a larger number of disciplines.

In chemistry, which interests us more here, there is large use of material models. A complex chemical event can be modeled by reducing the complexity of the chemical system. The field of research on collisions in atomic and molecular beams I mentioned before was originally conceived as a material model of more complex events of basic interest in chemistry. The study of a chemical reaction of biochemical interest performed in vitro under controlled conditions is a model of something happening in a living body or, to be more precise, in a cell, within one of its subunits, in close contact with one of its molecular substructures. Actually, the cell, the cellular substructure, the local arrangement of proteins, or whatever else one considers necessary, can be viewed as a sequence of material models.

There is also a need for material models in theoretical chemistry. The reduction of the complexity of material systems we have introduced above for experimental chemistry is largely used in theoretical chemistry too. There is no need to spend more words on this application of models, which is current in computational practice.

The material models in use in theoretical chemistry are not limited to chemical systems. The first idea of this classification occurred to me when working for my thesis on the construction and use of an analogic computer, composed of a network of coupled electric oscillators, to study the properties of overtone frequencies of molecular vibrations: this is a different type of material model.

The constant drive of all disciplines toward larger formal mathematization stimulates the use of abstract models. There are few material models, I think, in economics, the social sciences, epidemiology, etc., but in such disciplines large use is made of symbolic models with a large mathematical contribution. In theoretical chemistry too, abstract models are more widely employed than material ones. There are iconic, analogic, and symbolic abstract models, just as among the material ones. With the following few examples, we also try to give a more detailed definition of the three categories.

Iconic models are based on the similarity in form with the referent. The lock-and-key model of enzymology is an iconic abstract model: its realization in terms of material balls and sticks is a material iconic model. (We may now add its images on a computer screen as another iconic realization of that model: this possibility had not occurred to me at a time when computer graphics were not so developed. Are the images on a screen or on a computer printout to be considered material or symbolic models? I have no definite opinion on that point.)

Analogic models preserve some aspects of the form of the referent, but give emphasis to its functional aspects (or behavior). A mass-and-spring molecular model is the material realization of an analogic model to study vibrations, and the corresponding molecular mechanics model is its abstract counterpart. I believe that many models used in quantum chemistry have the status of abstract analogic models.
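As a minimal illustration of such an analogic model (a sketch added here, with assumed numerical values), the abstract counterpart of the mass-and-spring picture predicts the vibrational frequency of a diatomic from a force constant and a reduced mass alone, ν = (1/2π)√(k/μ):

import math

# Harmonic 'mass-and-spring' model of a diatomic vibration.
# Illustrative numbers, roughly those of carbon monoxide (assumed):
k = 1860.0                   # force constant, N/m
m1 = 12.000 * 1.66054e-27    # mass of 12C, kg
m2 = 15.995 * 1.66054e-27    # mass of 16O, kg
mu = m1 * m2 / (m1 + m2)     # reduced mass, kg

nu = math.sqrt(k / mu) / (2.0 * math.pi)   # frequency, Hz
print(f"{nu / 2.99792458e10:.0f} cm^-1")   # ~2146 cm^-1, close to the observed CO band

Only the functional behavior of the referent (its vibration) is retained; the model says nothing about what the molecule ‘looks like’.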

Symbolic models neglect the analogy of form with the referent and solely rely on the analogy of function. This is the realm of mathematical models (from quantum mechanics to thermodynamics), but the existence of non-mathematical symbolic models cannot be neglected. An example is given by the periodic table of the elements. The Kekulé model of benzene was also a non-mathematical symbolic model.

The last remark prompts us to consider that many models in chemistry and theoretical chemistry are based on molecules. The ‘partisan’ view of the historical development of chemistry presented above shows that there has been a large evolution of the logical status of this concept. Hence, we may accept the idea that chemical models have changed their taxonomic definitions over the years. Molecules started with the status of abstract non-mathematical symbolic models; later they reached the status of iconic models, to eventually jump from the world of models to that of real objects. A ball-and-stick structure can no longer be considered a model of a model: it is now an iconic model of a real referent.

The same happened to the quantum mechanical description of the molecular wavefunction and the corresponding density function: formerly abstract and symbolic models (of mathematical nature), these are today considered analogic models (often translated with the aid of computers into iconic models). Moreover, some people working in this field argue that what we actually ‘see’ of the molecular and submolecular structure (including the detailed point-by-point distribution of electronic and nuclear charges) is a direct manifestation of the quantum structure; thus, only the wavefunction should be considered the ‘real’ description of the system, other descriptions being ‘mere’ models. We shall not indulge in these considerations, which would bring us to a slippery field in which it is easy to lose control and to inadvertently destroy the methodological structure of chemistry. It is sufficient for us to keep to the analogic definition of molecular models and see what it means for the models of larger use in theoretical chemistry.
 
 

5. The basic structure of models in theoretical chemistry

We have found it convenient to divide models used in theoretical investigations in chemistry into four components:
  1. the material model
  2. the physical model
  3. the mathematical model
  4. the interpretative model
The ‘material model’ states the material composition of the model subjected to study. It may correspond to the actual portion of matter in which a given phenomenon is observed, or to a reduction or simplification of it. Let us take as an example the study of a chemical reaction in solution. The material model may be limited to the molecules corresponding to the nominal stoichiometric relation, thus implicitly assuming that their encounter is sufficient to describe the reaction. It may be further reduced by replacing the reactants with simpler molecules, introducing further assumptions about the relevance of some atomic groups in the reaction. Conversely, the material composition may be enlarged by including an increasing number of solvent molecules, other bodies acting as catalysts, etc. Note that nothing is said here about the level of description of the material model, which may have different degrees of analogy with the referent.

The ‘physical model’ states the physical interactions included in our study. These may include interactions among components of the material model only, or, more extensively, also interactions with the exterior. There is a large variety of options in defining the physical model. In the majority of applications, the physical model is limited to the electrostatic forces acting among electrons and nuclei, but the extension to external electric, magnetic, or electromagnetic fields is also common. Many other terms can be introduced into the model: spin-orbit or nuclear quadrupole coupling elements, to give a couple of examples, but many others could be mentioned as well, especially for the calculation of particular properties. There is now an increasing confidence in the theoretical-computational approach, leading to the consideration of electroweak as well as gravitational interactions that by tradition were not considered in earlier studies. The physical model also states whether a time-independent or a time-dependent formulation has been adopted.

It could be said that the physical model corresponds to what has been included in the Hamiltonian of the model. That is true, but we have to consider that many theoretical applications are based on extremely simplified formulations of the basic physics, not requiring an explicit formulation of the Hamiltonian. We have to keep the field of applicability of this partition scheme large: one positive point of the more recent evolution of theoretical chemistry is that there is space, with the same dignity, for highbrow quantum formulations as well as for classical formulations.
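For the most common choice mentioned above (nonrelativistic, time-independent, purely electrostatic interactions among N electrons and the clamped nuclei of the material model), the Hamiltonian takes the standard form, given here for concreteness in atomic units:

\hat H = -\sum_{i=1}^{N} \frac{1}{2}\nabla_i^2 \; - \; \sum_{i=1}^{N}\sum_{A} \frac{Z_A}{r_{iA}} \; + \; \sum_{i<j} \frac{1}{r_{ij}} \; + \; \sum_{A<B} \frac{Z_A Z_B}{R_{AB}}

Every enrichment of the physical model (external fields, spin-orbit coupling, and so on) appears as additional terms in this operator.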

The ‘mathematical model’ includes all aspects pertaining to the description of the physical interactions in the given material model. The mathematical model assesses the quality of the quantum mechanical description (Hartree-Fock, post-Hartree-Fock, Density Functional Theory, effective Hubbard or Hückel descriptions, etc.) as well as the basis set, the way of computing matrix elements, and so on. The continuous extension of these theoretical models to systems of large complexity calls for continuous extensions of the mathematical tools used in the model: a well-known example is provided by the now widespread computer simulations, which require topics of applied mathematics not used for the study of simpler systems. Further progress calls for the introduction of other mathematical methods not yet used in theoretical chemistry, and for the development of new ones. These considerations come out naturally when one considers the models of theoretical chemistry from this viewpoint.

The ‘interpretative model’ collects all the aspects of the study that are used for the interpretation of the application of the mathematical model to the material model, according to the prescriptions of the physical model. That should be the very core of the theoretical study, and it is the realm of the ‘chemical concepts’ according to Coulson’s definition.
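To show how the four components appear in current computational practice, here is a minimal sketch (an illustration of mine, assuming the PySCF package; any standard quantum chemistry program would serve equally well):

from pyscf import gto, scf

# Material model: one isolated water molecule, no solvent.
mol = gto.M(
    atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",
    basis="6-31g",            # basis set: part of the mathematical model
)

# Physical model: the program's default, i.e. the nonrelativistic
# electrostatic Hamiltonian, time-independent, no external fields.
# Mathematical model: the Hartree-Fock approximation.
mf = scf.RHF(mol)
mf.kernel()

# Interpretative model: a Mulliken population analysis, which
# translates the wavefunction into 'chemical' atomic charges.
mf.mulliken_pop()

Note that each step of the script answers a different methodological question: what matter is included, which interactions, at what mathematical level, and through which conceptual lens the numbers are read.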

The partition of models for theoretical chemistry is of help when judging models, as experience has shown us, and it may be used also to improve the models. To do that one has first to consider the goal of the research performed with that model. The objectives of theoretical studies can be divided, grosso modo, into three categories:

  a. studies addressed to the calculation of values of properties (physical observables) of the material system;
  b. studies addressed to the interpretation of chemical or physical phenomena;
  c. studies addressed to the development of specific research tools, in particular for the interpretative model.
Some studies of the first kind (a) aim at the highest accuracy. These are the descendants of the ‘in-depth computing’ of forty years ago. Today, there is no longer any need to show by numerical examples that an accurate application of quantum mechanics works. However, there are problems requiring the application of high computational levels to certain observables. The most important ones are methodological problems regarding the mathematical model. New formulations of parts of the mathematical apparatus, now quite formidable even in standard applications, require the use of benchmarks both for their accuracy and for their efficiency. There are other problems requiring the use of accurate calculations, for example the development of methods for some physical aspects not yet completely settled, e.g. relativistic effects. Here again, methodological motivations are dominant, and they involve both the physical and the mathematical aspects of the model.

Other studies of type (a), quite numerous in the literature, are performed at a mathematical level of medium-low quality. They should be limited to cases in which there is a need for an approximate value of an observable that can be determined faster and at lower cost than by experiment. Since there is an increasing demand for such approximate values for a large number of systems, often of complex composition, this approach makes sense. However, the practical goal of this type of application must be explicitly recognized, and the model should meet some specific criteria, in this case mainly regarding the mathematical model. We shall examine some general criteria for models later.

Studies of type (b) should represent the main subject of theoretical chemistry. Here the interpretative model plays an important role. While the development of interpretative tools belongs to type (c), the resultant interpretative models are just applied here. However, actual scientific inquiry is a complex task that cannot be reduced to the simple application of a model, even if it is well conceived. In general, it requires the application of a model followed by a critical examination of the results (i.e., what the model has produced), and then, if there is no fully satisfying explanation of the phenomenon, the development (or choice) of another model, in a sequence of steps.

It is convenient to introduce here a partition of the course of theoretical studies of type (b).

1) The report. First, one should clearly state the research objective. Then, from the collected output of the model, one must make a selection according to its possible evidence (or counter-evidence) concerning the question of interest. Almost all theoretical methods now in use produce a huge quantity of numerical data, from among which one has to select the relevant numerical evidence provided by the model. In some cases, the report is sufficient to reject a model (e.g. for reasons due to its mathematical component), but usually it provides the material for the following step.

2) The interpretation (or description). The aspects of the phenomenon for which the report gives evidence are related to a set of ‘chemical concepts’ that introduce a rationale into the empirical evidence. This is the field of application of ‘structural’ theories, which are, by definition, not complete and do not exhaust the problem. They are often in competition with each other: a given phenomenon can be described in different ways, using different chemical concepts (or, in other words, invoking different ‘causes’). The competition among interpretative models is not a weak point of the procedure. By contrasting different interpretations, it may be easier to arrive at the completion of the research.

3) The explanation. The last step aims at a fuller comprehension of the phenomenon. Contrasting interpretations must find a synthesis here. Because one rarely finds a sound explanation by examining the description provided by a single model only, one needs to consider several models. It is now customary practice to examine descriptions obtained by standard changes of the mathematical model: e.g., using ab initio quantum methods with a sequence of expansion basis sets of increasing complexity, applied at different levels of the quantum theory (Hartree-Fock or one-determinant SCF, many-determinant SCF, perturbation theory corrections, etc.). While this is good practice, it can be insufficient to establish an explanation. Deeper changes in the model may be necessary, but it is difficult to put that into general rules. The definition of alternative models depends on the nature of the problem and must be left to the chemical insight and shrewdness of the researcher. Often the explanation is the result of collective work, based on descriptions given by independent researchers on the basis of different models.
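A schematic example of such a sequence of descriptions (a sketch, again assuming PySCF; the molecule, basis sets, and levels are illustrative choices of mine):

from pyscf import gto, scf, mp

# The same material and physical model, described by a sequence of
# mathematical models of increasing quality.
for basis in ("sto-3g", "6-31g", "cc-pvdz"):      # basis sets of increasing size
    mol = gto.M(atom="H 0 0 0; F 0 0 0.917", basis=basis)
    mf = scf.RHF(mol).run()                       # one-determinant SCF
    pt = mp.MP2(mf).run()                         # perturbation-theory correction
    print(basis, mf.e_tot, pt.e_tot)              # is the description stable?

An explanation gains credibility when the interpretation drawn from the report remains stable across such a sequence; instability is a warning that the model, and not the referent, is speaking.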

5.1 Criteria to judge interpretative models

Theoretical studies of the third category (c) represent an important sector of theoretical activities. In the preceding 40 years they have had, and in the near future they will continue to have, an important role in assessing the status of theory in chemistry, and ultimately the methodological status of chemistry. We shall discuss chemical concepts, and strategies for improving them, in the next section. Before doing so, it is necessary to consider some criteria to judge models. The following list is surely incomplete, but it corresponds to the criteria I have used most frequently.

1) Simplicity. The selection of the aspects of the referent to include in the model is a delicate task. The inclusion of unessential aspects of the referent makes the model obscure and reduces its significance. On the other hand, models that are too simple may lose important hidden features of the referent and are therefore of little use in scientific research. Ad hoc assumptions must be avoided here. The balance between simplicity and completeness is the result of a continuous struggle in defining models, and that balance eventually measures the quality of the model. Related to simplicity is clarity: a good model should be easily describable, understandable, and applicable.

2) Self-consistency. A model must not be contradictory. Moreover, it should meet a more general criterion of coherence. In particular, models related to the realm of the physical and chemical disciplines should not be in contradiction with the basic principles of science. Models that connect features of the referent (or of the model itself) in contradiction with some basic criterion, such as dimensionality, must be regarded with some suspicion. Good performances of models of this kind may be due to chance, and may obscure more satisfactory models with the same features.

3) Stability. A model should allow the introduction of changes or complements without destroying its internal structure. It should be possible to use a good simple model as the starting point for a sequence of models of increasing complexity, to obtain more accurate descriptions of the referent (properties of the chemical systems, characteristics of the process). In other words, it must be robust.

4) Generality. A good model should allow one to draw new connections between different referents, not evident at first or not considered during the development of the model. This is an important feature of models in chemistry, where we may operate on a myriad of different chemical compounds without easily being able to define a priori (i.e. with the help of purely mental models) similarities in properties.

5) Usefulness. The effort spent in the development and use of the model should be rewarded. Depending on the case, the reward is given by accurate, or by reliable and useful, estimates of properties of the referent (in other terms, congruence between model and referent is required); in other cases, by a sound interpretation of a property or phenomenon. Among the latter, models that reveal new aspects of the referent not explicitly considered during their development are particularly interesting. A successful model should produce ‘surprises’ in the course of its application. This is one of the ways progress is made in science.

These general criteria apply to all models used in exact sciences, and in particular to those used in theoretical chemistry. In the classification of the models for theoretical chemistry presented above, an important role has been reserved to the fourth component, the interpretative model. Research on the development of interpretative models (which are, as we have seen, in general used as submodels, i.e. as components of a more complex model) must take into account these criteria, of course. However, there are further aspects of interpretative models to be considered that we address now.
 
 

6. Interpretative models and chemical concepts

Our effort to keep the present analysis of theoretical activities in chemistry at a general level becomes increasingly difficult as we pass to specific aspects of the problem. The partition of models for theoretical chemistry proposed in the previous section is fairly general but does not cover all the ways in which scientific inquiry can be, and actually is, done. In passing now to a specific component of this partition, the interpretative model, I am obliged to narrow the field under scrutiny further.

I shall confine myself to some considerations about interpretative models having as their object (i.e. as referent) the structure and properties of a molecular system. We are still in the world of models, the real matter remains in the background, but I am introducing here a limitation within the models. It should be emphasized that theoretical chemistry also uses interpretative models that do not have molecules or similar pieces of organized matter as referent. Here, I shall not develop a classification of these additional categories of models, often of a more holistic nature: the subject deserves a separate analysis. This remark has been added simply to dispel the impression of neglecting an important part of recent theoretical activities in chemistry.

This said, I have also to remark that interpretative models having molecules or similar portions of organized matter as objects represent the main core of theoretical activity in chemistry. There has been a progressive enlargement of the size of such portions of matter. Quantum chemistry started with material models limited to a single molecule (of small dimensions) and is now able to treat by far more complex material models, without limitations on the nature of the interactions among subunits. Even formally infinite systems, such as regular crystals, solutions, surfaces, etc., can be treated on the same footing as single molecules or small clusters of molecules. There are differences in technical details, of course, but they introduce no essential distinction.

The particular models we are going to consider now have a focal component, which may include the whole material model or a part of it, repeated identically or with some minor differences as in regular crystals or in solutions. The focal component, even when limited to a single molecule, is not a good starting point for interpretative models. Theoreticians require something simpler: in fact, the interpretation is given, at this level of the theory, as the result of an analysis. Hence, it is necessary first to dissect the focal component into submolecular units.

Formal quantum theory does not give us precise indications of how to define these subunits; thus, the delicate preliminary step of the analysis is to some extent arbitrary. Different choices of the starting subunits can lead to different interpretations of a given phenomenon. As we have already said, interpretation is not univocal, and competition among different interpretations is a main way of progress in theoretical chemistry.

Quantities without a correct formal definition can be called, following Primas (1983), graceless. We may accept such graceless definitions of the subunits, but they must satisfy the criteria about models introduced in the preceding section. This constraint greatly reduces the freedom each researcher has in defining his own way of performing the analysis. It must be added that the efforts of theoreticians have increasingly introduced precision into concepts originally defined in a graceless way, at the same time increasing their robustness (this word, too, is taken from Primas’ book). I will give here some examples, drawn from the analysis and interpretation of molecular structures.
 

6.1 Definitions of basic subunits

There are three main lines of investigation into the structural properties of molecules, depending on the choice of the basic subunits to be studied: atoms, molecular orbitals, and density matrices, the latter being subjected to appropriate reductions and manipulations.

At first glance, atoms are a very reasonable choice. They may be studied independently; the energy needed to assemble them into a molecule represents a small fraction of their total energy. On the other hand, the small amounts of energy corresponding to the building of molecular edifices are precisely the source of the effects under examination. Therefore, intermediate steps in the build-up process are necessary, in particular the ‘valence prepared atomic states’ that constitute the real subunits of the analysis. Valence atomic states are undoubtedly graceless, without reference to physical observables, and their actual definition is open to considerable arbitrariness.

Atomic hybrid orbitals, one of the ‘concepts’ cited as examples by Coulson, are important constituents of valence atomic states. The concept was central to the VB theories of the preceding period of quantum chemistry, of which I have already pointed out some positive aspects as well as serious limitations. Atomic hybrids continue to represent an important tool of theoretical analysis, because they have been recovered from another choice of basic subunits, the molecular orbitals (MOs), which will be examined below. With this new formulation, the concept has gained in precision and, what is more important, in robustness. Valence atomic states may now be defined a posteriori with less arbitrariness (though they have little use today). VB theory, too, has been reformulated by using MOs, and has thus gained in robustness. After about thirty years of almost complete eclipse, it is now again a valuable mathematical model and a very useful instrument for analyzing calculations performed with other, more efficient methods.
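For readers less familiar with the concept, the classical textbook example may help; it is standard material, not specific to any of the reformulations discussed here. The four tetrahedral sp3 hybrids of a carbon atom are fixed linear combinations of one s and three p atomic orbitals:

```latex
% The four tetrahedral sp^3 hybrids, as fixed, mutually orthogonal
% linear combinations of one s and three p atomic orbitals:
h_1 = \tfrac{1}{2}(s + p_x + p_y + p_z), \qquad
h_2 = \tfrac{1}{2}(s + p_x - p_y - p_z),
h_3 = \tfrac{1}{2}(s - p_x + p_y - p_z), \qquad
h_4 = \tfrac{1}{2}(s - p_x - p_y + p_z).
```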

Molecular orbitals have their origin in a simplification of the basic quantum mechanical mathematical model, based on an expansion of the total electronic wavefunction over one-electron functions (these are just the MOs). In the preceding period, simplified versions of MO theory were the only available alternative to VB theory for the analysis of molecular structures. The indices and the other tools derived from these simplified MO versions were of limited stability, however. In addition, MO properties (e.g. orbital energies) bear only a loose, first-approximation relation to physical observables: limited grace and limited robustness.
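In standard notation, and leaving normalization conventions implicit, the simplification consists in approximating the N-electron wavefunction by an antisymmetrized product (a Slater determinant) of MOs, each MO being in turn expanded over a fixed set of basis functions (the LCAO form):

```latex
% Psi as an antisymmetrized product of one-electron functions (MOs),
% with A the antisymmetrizer; each MO phi_i expanded over basis
% functions chi_mu with coefficients c_{mu i} (the LCAO form).
\Psi(\mathbf{x}_1,\dots,\mathbf{x}_N) \approx
  \hat{\mathcal{A}}\bigl[\varphi_1(\mathbf{x}_1)\,\varphi_2(\mathbf{x}_2)
  \cdots \varphi_N(\mathbf{x}_N)\bigr],
\qquad
\varphi_i = \sum_{\mu} c_{\mu i}\,\chi_\mu .
```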

Today, the MO approach is the most widely used method to study chemical systems. Much work has been done to develop the potential of methods that use molecular orbitals as the starting point for more accurate calculations; here we shall not consider these important achievements, which mostly regard the mathematical model. In parallel, MOs have been used in several different, new, and ingenious ways to develop interpretative tools. It would be interesting to present here an outline of these efforts and their successes and failures, but that would mean summarizing more than forty years of research. A large portion of the new ‘chemical concepts’ introduced in recent years derives from this activity. It may be said that its results represent a considerable part of the legacy of the present generation to the future development of quantum chemical theory. Here, we shall confine ourselves to one specific use of MOs that represents a bridge between the three basic choices of subunits we are examining.

Molecular orbitals directly obtained from quantum calculations (called canonical MOs) respect the symmetry of the molecule and are therefore delocalized over the whole molecule. This feature is in contrast with many widely used chemical concepts, such as chemical bond, lone pair, and chemical group, which all have a local character. The whole molecular electronic distribution (which we shall later define as a reduced density function) is, however, invariant with respect to any unitary transformation of the original canonical orbitals used in its definition. This allows a change of representation based on localized molecular orbitals (called LOs) without introducing approximations. LOs can be defined in different ways; some of these, called intrinsic procedures, are not arbitrary but based on a simple physical rule (such as the maximum separation between single-electron distributions). The passage to LOs permits the recovery of the aforementioned chemical concepts, giving them the necessary quantification without ad hoc assumptions. By a further step it is possible to dissect each LO into atomic contributions, thus recovering, at the same degree of accuracy, other chemical concepts, such as atomic hybrids, bond polarity, the build-up of atoms into valence states, etc. These procedures give more precision and a more robust character to the concepts.
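The invariance just stated is easy to verify numerically. The following minimal sketch uses arbitrary, hypothetical orbital coefficients and reproduces no particular localization procedure; it only checks that the one-electron density matrix, and hence the electron density, is unchanged when the occupied orbitals are mixed by a unitary transformation:

```python
# A minimal numerical check (not any published procedure) of the
# invariance of the electron density under a unitary mixing of the
# occupied MOs. All numbers here are arbitrary and hypothetical.
import numpy as np

rng = np.random.default_rng(0)
nbasis, nocc = 10, 4

# Occupied MO coefficients: orthonormal columns (via a QR factorization).
C, _ = np.linalg.qr(rng.standard_normal((nbasis, nocc)))

# A random real unitary (orthogonal) matrix mixing the occupied orbitals,
# standing in for any localization transformation.
U, _ = np.linalg.qr(rng.standard_normal((nocc, nocc)))
C_loc = C @ U

# The density matrix D = C C^T determines the electron density completely.
D, D_loc = C @ C.T, C_loc @ C_loc.T
print(np.allclose(D, D_loc))   # True: the density is invariant
```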

This satisfactory development of the model has a weak point, however. It is limited to descriptions of the molecular electron density expressed at the Hartree-Fock level (i.e. as a sum of single-electron distributions). To get more accuracy it is necessary to pass to post-Hartree-Fock methods, which introduce more correlation into the motion of the electrons. At these correlated levels of the definition of the electronic distribution, a simple and clear decomposition into LOs is no longer valid. Theoreticians did not lose heart and developed a more general definition of MOs, the natural orbitals (NOs), which permits a recovery of the local descriptions previously introduced at the HF level only and gives additional robustness to the chemical concepts by describing molecular bonding within the LO formalism [3].

Density matrix theory provides a formal generalization of the wavefunction description of a quantum system. There is no need to consider here the material and physical models for which this generalization is necessary. For the models with a focal component, what interests us is the use of reduced density matrices, in which everything not strictly necessary to describe the system is eliminated by a mathematical average. The reduced density matrices of interest to us are those of the first and the second order. The first describes the distribution of the basic particles (electrons and nuclei); the second also includes pair interactions among such particles.

To simplify the exposition it is convenient to introduce an approximation, acting on the mathematical and physical components of the model, according to which the positions of the nuclei are kept fixed and the quantum problem is reduced to that of the electrons. This approximation (the clamped-nuclei or Born-Oppenheimer approximation), in general use in quantum chemistry, permits us to focus attention on the electronic component of the first-order density matrix, also called the electron distribution of the molecular system.
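In formulas, and leaving aside spin and the normalization conventions that vary among authors, the reduction reads as follows: the first-order density matrix is obtained from the N-electron wavefunction by averaging over all electrons but one, and its diagonal element is the electron density discussed below:

```latex
% First-order reduced density matrix (spin and normalization
% conventions vary among authors) and its diagonal, the electron density.
\gamma_1(\mathbf{r};\mathbf{r}') =
  N \int \Psi(\mathbf{r},\mathbf{x}_2,\dots,\mathbf{x}_N)\,
         \Psi^*(\mathbf{r}',\mathbf{x}_2,\dots,\mathbf{x}_N)\,
         d\mathbf{x}_2 \cdots d\mathbf{x}_N ,
\qquad
\rho_e(\mathbf{r}) = \gamma_1(\mathbf{r};\mathbf{r}) .
```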

The electron distribution is a physical observable; it is convenient to examine its mathematical form. It is a scalar field defined in Euclidean 3D space: at every point r of this space, the electron density function ρe(r) assumes a specific value. This value (a scalar) can be experimentally measured.

To get from ρe(r) the subunits we are searching for, one has to define, within the 3D space, volume elements corresponding to the single subunits. There are different ways of defining the surfaces limiting such volumes, but we shall confine ourselves to the topological approach developed by Bader (1990). From the density function other functions are derived: the gradient ∇ρe(r) and the Laplacian ∇²ρe(r). The first is a vector field, the second a scalar field, and both are defined in the same 3D space as ρe(r). The gradient vector is perpendicular to the isodensity contour line at each point r: it is thus possible to define ‘gradient paths’, i.e. trajectories following the gradient through a continuous succession of points. These trajectories end at special points (the critical points, where the gradient is zero) which correspond to maxima or minima of ρe(r). There are few minima in molecules, with the notable exception of points at infinity; more important are the maxima, which correspond to the positions of the nuclei. Each nucleus acts as an attractor of gradient paths, and every gradient path ends at a nucleus. Thus, the whole molecular space is partitioned into ‘basins’ ΩX, each corresponding to the space spanned by the set of paths ending at a given nucleus. The surfaces separating basins are mathematically defined in a univocal way. It is possible to consider each basin ΩX as describing an atom X. The properties (i.e. the expectation values) of each ‘atom’ can be computed with the normal quantum mechanical rules, by integration of the function P(r)ρe(r) over the basin’s space (P(r) being the operator related to the property P).
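A toy version of this partitioning can be sketched in a few lines. In the fragment below the ‘density’ is a made-up sum of two exponential terms, not a computed ρe(r), and all numerical choices (grid, step, thresholds) are purely illustrative; each grid point is assigned to a basin by following the steepest-ascent gradient path to its attractor, and the simplest property (the electron population, P = 1) is then integrated over each basin:

```python
# A toy sketch of the topological partitioning described above. The
# 'density' is a made-up sum of two exponential terms, not a computed
# rho_e(r); grid, step and thresholds are purely illustrative.
import numpy as np

R = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0]])   # two 'nuclei' (a.u.)
zeta = np.array([2.0, 1.5])                        # hypothetical exponents

def rho(r):
    d = np.linalg.norm(r - R, axis=1)
    return float(np.sum(np.exp(-zeta * d)))

def grad_rho(r):
    d = np.maximum(np.linalg.norm(r - R, axis=1), 1e-12)
    return np.sum((-zeta * np.exp(-zeta * d) / d)[:, None] * (r - R), axis=0)

def basin(r, step=0.05, tol=0.1, max_iter=400):
    """Follow the steepest-ascent gradient path from r to its attractor;
    return the index of the nucleus whose basin Omega_X contains r."""
    r = np.array(r, float)
    for _ in range(max_iter):
        d = np.linalg.norm(r - R, axis=1)
        if d.min() < tol:                    # reached an attractor
            break
        g = grad_rho(r)
        n = np.linalg.norm(g)
        if n < 1e-15:                        # stalled at a critical point
            break
        r += step * g / n
    return int(np.linalg.norm(r - R, axis=1).argmin())

# Crude basin integration of the simplest property (P = 1): the electron
# population N_X of each basin, on a coarse (and slow) cubic grid.
grid = np.linspace(-3.0, 4.4, 16)
w = (grid[1] - grid[0]) ** 3                 # volume element
pop = np.zeros(len(R))
for x in grid:
    for y in grid:
        for z in grid:
            p = np.array([x, y, z])
            pop[basin(p)] += rho(p) * w
print("basin populations:", pop)
```

Even this crude version makes plain why a complete Bader analysis is computationally demanding, a point to which we return in Sect. 7.5.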

All gradient paths connect a nucleus with another zero-gradient point; in general these are minima placed at infinity. A limited number of paths connect nucleus to nucleus: these are the signatures of chemical bonds. A more detailed characterization of the bonds may start from these ‘ridge’ trajectories (also called ‘bond paths’). By introducing the Laplacian function ∇²ρe(r) into the analysis, the description of the bonding structure can be further refined.
 
 

7. More about interpretative models

In the last section, we have presented a short, and extremely incomplete, exposition of methods addressed to the definition of basic subunits for the analysis and interpretation of the quantum mechanical description of a material model. Now we can use it as a basis for further considerations on interpretative models. Attention will be paid to some additional points: congruence with the development of the theory, innovation, the evolution of models, the role of analogy and cross-fertilization in the definition of models, the planning of the use of models, checks and assessment of models, and the competition among models.

Let us first consider again the definition of the basic subunits. Models directly based on atoms have shown significant weak points. The evolution of the theory and of the mathematical models has greatly reduced the direct use of that definition of subunits. The concept of the atom, so important in all the preceding chemical literature, can be recovered through other approaches, such as the two others we have mentioned.

Models based on LOs are directly inserted into mainstream quantum molecular calculations, almost all of which use the MO approach. That is one reason for the interest in such models. Innovation in this field dates back to the Boulder conference already quoted (Boys 1960); the evolution continues to be extremely active, especially for the study of material models of large size (Tomasi 1996b, Pomelli & Tomasi 1998).

Bader’s method gives a description of the electronic part of the molecule that has no direct relationship with the models elaborated during the early periods of quantum chemistry. That is quite a positive feature: there is always a need for innovation, and it is not easy to find really new ideas in the basic field of the analysis of molecular wavefunctions. Two generations of researchers, or more, have exerted their acumen on this subject [4].

The evolution of the Bader approach (I have given my view of it in a recent paper, Tomasi et al. 1996) offers a good example of how an interpretative model begins, grows, and reaches maturity. Starting from a suggestion coming from some referent ‘real’ system or from some other model, a provisional version of a new approach is set out. After it passes the stage of initial tests, it is refined in its physical and mathematical components and then generalized and extended for application to other phenomena. Such an evolution is evident in the series of papers published by Bader over a range of almost forty years, but it may be found in the series of publications on other important methods as well. For the LO method, to which our group has made contributions of some relevance, such an evolution was expressly planned. Partial overviews of the application of this planning to some specific key problems have been published over the years (Bonaccorsi et al. 1980, Tomasi 1988, Tomasi et al. 1996).
 

7.1 Cross-fertilization

At the last stages of the evolution of Bader’s method before maturity, the corresponding analysis of the potential energy surface (PES) describing nuclear motions in the Born-Oppenheimer approximation was a great help. The PES analysis is the cornerstone of the theoretical study of molecular reactions and of molecular properties related to nuclear motions. The PES refers to another scalar function, E(R), defined in the (3N−6)-dimensional space spanned by the coordinates of the nuclei (N being the number of nuclei). The topological problem presented by the analysis of E(R) is by far more complex than the corresponding topological analysis of ρe(r) given by Bader. There is no need to analyze here the several reasons for the higher complexity of the PES; some are related to the higher dimensionality of the space, others to the necessity of considering several PESs at the same time, which may intersect each other, merge, and exhibit couplings of various natures. This difference in intrinsic complexity made it easier for Bader to transfer methods from the PES context into the analysis of ρe(r), but until now it has also impeded the exploitation, in the analysis of E(R), of the further elements Bader introduced in his topological analysis of ρe(r). Such an exploitation might be possible, however. This is meant as a suggestion for further work, and it is a possible example of another important aspect in the evolution of models and methods: the cross-fertilization among separate fields.
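The basic step of PES analysis mentioned here can also be illustrated by a sketch. The following fragment uses a fictitious two-coordinate ‘surface’ (no real molecule is implied): it locates points of vanishing gradient by a Newton search and classifies them, in the standard way, by the number of negative Hessian eigenvalues (zero for a minimum, one for a transition state):

```python
# A sketch of the basic step of PES analysis on a fictitious surface
# E(x, y) = (x^2 - 1)^2 + y^2 (no real molecule is implied): stationary
# points are located by a Newton search on the gradient and classified
# by the number of negative Hessian eigenvalues.
import numpy as np

def grad(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def hess(p):
    x, y = p
    return np.array([[12.0 * x**2 - 4.0, 0.0],
                     [0.0,               2.0]])

def newton(p, tol=1e-10, max_iter=50):
    """Newton iteration towards a point of vanishing gradient."""
    p = np.array(p, float)
    for _ in range(max_iter):
        g = grad(p)
        if np.linalg.norm(g) < tol:
            break
        p -= np.linalg.solve(hess(p), g)
    return p

for guess in ([0.9, 0.1], [0.1, 0.1]):
    p = newton(guess)
    n_neg = int(np.sum(np.linalg.eigvalsh(hess(p)) < 0.0))
    kind = {0: "minimum", 1: "first-order saddle (transition state)"}[n_neg]
    print(f"stationary point at {np.round(p, 6)}: {kind}")
```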
 

7.2 Analogy

Another way to proceed is by analogy. Is it possible to select other physical functions and submit them to a treatment analogous to that introduced by Bader for ρe(r)? The answer is positive: there are many physical functions corresponding to scalar, vectorial, and tensorial fields, and for several among them the topological analysis has been considered. We may mention the local value of the kinetic energy Ke(r) and the molecular electrostatic potential Vtot(r) (both of scalar type), and the first-order electron current density j(1)(r) and the molecular electric field Ftot(r) (both of vectorial type). Until now, however, the approach has been applied to only a few physical functions, and more could be explored. We shall not linger on the analysis of these extensions of a basic methodology, but prefer to pass to another point.
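As an example of such functions, the molecular electrostatic potential named above has the standard definition (Z_A and R_A being the nuclear charges and positions):

```latex
% Molecular electrostatic potential: nuclear contribution minus the
% potential generated by the electron density rho_e.
V_{tot}(\mathbf{r}) =
  \sum_{A} \frac{Z_A}{|\mathbf{r}-\mathbf{R}_A|}
  - \int \frac{\rho_e(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,
    d\mathbf{r}' .
```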
 

7.3 Interactions among subunits

The definition of basic subunits is not the end point of an interpretative method. For an interpretation we have to use our basic subunits to examine the interactions between them and to quantify their contributions to a given phenomenon (e.g., the value of some physical observable, the output of a chemical process, etc.). Interaction between subunits also means changes in the properties of the subunits. This is an essential point in chemistry. Since the very beginning of modern chemistry, it has been recognized that there is a rationale behind the enormous variety of chemical phenomena, represented by the presence of ‘chemical functions’ having (almost) invariant properties in different chemical compounds. For at least a century, we have also known that these ‘functions’ are groups of atoms, localized in the molecule. Among the first objectives of applying an interpretative method is thus the determination of the degree of transferability of ‘identical’ subunits from molecule to molecule, of the change of their properties when assembled into a chemical group (the subunits used in the analysis are always smaller than chemical groups), and of the interactions among subunits and groups of one or more molecules.

This research program requires accurate planning and considerable computational effort. A set of checks must be made at different levels. There are checks on the model in use, to measure the transferability of the subunits. There are also checks involving other models, in order to compare and contrast the definitions they give of chemical groups and their properties (it must be recalled that chemical groups, too, are models; hence comparing different renderings of the same model is a legitimate procedure). Finally, there are checks against the measured properties of the real systems, because all our work on models has the ultimate goal of understanding the properties of real matter.

When an interpretative model has positively passed that set of checks, the researcher is authorized to apply it. (For practical reasons, this ideal sequence of steps is in general not faithfully respected: the model is applied simultaneously with, or even before, the necessary checks. It is up to the sensibility and responsibility of the research leader to find a reasonable compromise between methodical carefulness and speed of research progress.)
 

7.4 Falsification versus validation

The checking of procedures (or models) requires an additional remark. Looking at the ample literature on this topic, which covers very disparate procedures and models, it is obvious that almost all checks are addressed to the validation, not the falsification, of the method. There are several reasons for preferring the validation approach. One is that we are examining models, not basic theories. Since a model is by definition incomplete, there are aspects of the referent it does not describe fully or faithfully. Models, at least our models, can be applied at different levels of accuracy of the mathematical apparatus (this is another aspect related to simplicity). It is necessary to explore the performance of the model over large ranges of the possible variables, in order to use its failures to define the limits of the model’s validity and the degree of simplicity at which it gives meaningful results. On the other hand, if the model shows too high a rate of failures, it is necessary to draw well-documented conclusions about its limits and expel it from the list of recommended methods. I am not aware of a satisfactory analysis of the problem of testing models. It is a problem of great importance in all fields of science.

The application of an interpretative model should involve two aspects of different natures: analytic and synthetic. The analysis consists in examining the properties of the subunits and how they combine to give the property of the whole system; the synthesis consists in formulating appropriate numerical experiments to understand how the subunits are modified (with respect to some standard) in the whole system.

Here another criterion for judging models becomes important. None of the models we have considered in this paper can be applied without the use of computers. Thus, the complexity of the calculations necessary to apply a model is no longer a discriminating parameter. That is why I put the emphasis on conceptual simplicity when introducing simplicity among the criteria (Sect. 5). However, the need for exceedingly complex calculations may hamper the sequence of checks and, what is perhaps more important, the development of numerical procedures for the step of synthesis just mentioned.
 

7.5 Competition among models

All the considerations I have reported play a role in the competition among models. Let us try to apply them to the two approaches I have selected for this analysis, the topological Bader approach and the LO procedures.

Bader’s model has precision, self-consistency, and robustness (no restrictions are introduced in its basic element, ρe(r)). Its generality has been shown by its extension to other mathematical functions. Much effort has been spent by Bader and others to assess the usefulness of the approach, with excellent results. Many ‘chemical concepts’ have been recovered within Bader’s method, directly or indirectly. As an example of direct recovery, I quote atomic charges, which are not an observable but a chemical concept; two examples of indirect recovery are bond polarity and electronegativity, which were not explicitly introduced into the model. ‘Surprises’ have also been found.

In contrast, LO methods have less precision. Their robustness is similar. I have not presented material to judge the generality of that approach, because a survey of the achievements, on which many researchers have spent their efforts, would be too long. However, it may briefly be stated that the area of successful applications is by far larger than that covered by Bader’s approach until now. The LO approach has been rich in surprises, in every field of chemistry, and has made very important contributions to the development of other models (not only for interpretation, but also as a result of interpretation), e.g. in the study of biological systems. This research activity has strongly contributed to renewing the older chemical concepts and to adding new items to the list.

The different output of Bader’s approach and the LO approach is partly due to the computational complexity of the former. The computer time necessary for a complete Bader analysis is generally larger by one or two orders of magnitude than that necessary for the calculation of ρe(r). An LO analysis requires a fraction of the computational time necessary to get the wavefunction. That difference in simplicity has eased the development of synthetic tools of interpretation combined with analytic tools, leading to strategies for a remarkable number of complex problems. The most significant contributions of our group, developed over a range of about forty years, have centered on the development of a model (a ‘hypothesis’, according to our terminology, to be verified step by step) called the ‘semiclassical approximation’, by which the elements of quantum calculations are interpreted and recast in the language of classical physics. In this model LO methods play an important role.

Interested readers may find in some papers of our group (Alagona et al. 1988, Tomasi et al. 1991a/b, Tomasi et al. 1996) a more detailed exposition of the underlying methodological considerations, as well as an overview of the fields of application, which range over the analysis of the structure and properties of molecules, molecular interactions, photochemistry, chemical reactions, the modeling of drugs and large systems, and solvent effects on all the above-mentioned topics.
 

7.6 Other interpretative models

I have spent much space examining a few interpretative models for the analysis of quantum mechanical calculations of molecular systems. The exposition of models belonging to this category has been limited to a small selection within the Born-Oppenheimer approximation (and I have profited from it to give a perhaps too partisan view of part of my scientific interests). Theoretical chemistry is by far richer. The motions of nuclei have not been considered here, in spite of their importance in chemistry; the explicit inclusion of time in the physical model, for the study of the dynamics of systems, would have opened another dimension. The influence of the surrounding medium on the focal component of the model has also been neglected, although almost all chemical phenomena occur in condensed phases, often of heterogeneous nature. The consideration of both aspects would have noticeably enlarged the present picture of interpretative models (and would have given me another opportunity for a partisan exposition of my other scientific interests). The idea of drawing up a list of further examples of methodologically significant fields, in which I am less expert, brings me to a state of despondency: the variety of problems, and of methods, is so large that it is difficult to give a meaningful selection.

However, I will add a remark on one particular point. We have considered methods mainly addressed to the analysis of the diagonal element of the one-particle density function. That function characterizes well the distribution of particles (electrons, in particular), but it depends only implicitly on the interactions between particles. The main body of information about interactions is carried by the two-particle reduced density function. Interactions are by far more abstract than particles, but a proper understanding of their behavior is necessary. The same sequence already examined (definition of the basic subunits, methods of computation, methods of analysis, etc.) has been applied to the study of the interactions between particles, a subject more difficult to treat than that of the particles themselves. New concepts have been introduced, with names not yet widespread in the chemical community, such as ‘intracule’ and ‘extracule’; they have thus begun the same progression up the ‘scale of reality’ of models that molecules and electrons made in the past. People working in this field think in terms of intracule and extracule densities as if they were particles, and subject them to the same topological analyses as electrons. One of the first steps in these studies, of recent development, was the elimination of vaguely defined concepts. The lesson of the past has not been forgotten.
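For orientation, the usual definitions may be recalled (up to normalization conventions, which vary among authors): with Γ(r1, r2) the diagonal part of the two-particle density, the intracule density is the distribution of the relative coordinate of a particle pair, and the extracule density that of its centroid:

```latex
% Intracule I(u) and extracule E(R) densities from the diagonal
% two-particle density Gamma(r1, r2), up to normalization conventions.
I(\mathbf{u}) = \int \Gamma(\mathbf{r}_1,\mathbf{r}_2)\,
   \delta\bigl(\mathbf{u}-(\mathbf{r}_1-\mathbf{r}_2)\bigr)\,
   d\mathbf{r}_1\, d\mathbf{r}_2 ,
\qquad
E(\mathbf{R}) = \int \Gamma(\mathbf{r}_1,\mathbf{r}_2)\,
   \delta\bigl(\mathbf{R}-\tfrac{1}{2}(\mathbf{r}_1+\mathbf{r}_2)\bigr)\,
   d\mathbf{r}_1\, d\mathbf{r}_2 .
```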
 
 

8. An attempt of conclusion

Since it is not possible to document here the status of interpretative models in current research more exhaustively, the reader may accept as a bona fide statement my strong belief that modern methodological studies have almost completely eliminated vague notions, like ‘driving forces’ or ‘effects’ of various denominations. All the notions, concepts, and other tools newly introduced into the theory, as well as the reformulations of the older ones that have been retained, are accompanied by precise indications of how to define and measure them.

This outcome of theoretical methodology was not at all obvious. In both its first and its second stage (1900-1930 and 1930-1960), theoretical chemistry worked with concepts for which direct checks and quantitative evaluations were not possible. The interpretation of electronic molecular structures according to the Lewis rules and according to the VB mesomeric structures are two examples, drawn from the first and the second period, respectively. The analysis of this essential point of chemical theory, the description and interpretation of molecular electronic structure, was performed within a methodological framework not corresponding to that used in the main core of chemistry.

The former approaches gave important and enlightening results; there can be no doubt about it. They permitted the formulation of general rules, the prediction of trends, the establishment of a rationale for many intriguing properties, but on a methodological basis delicate to handle, one that required considerable scientific maturity of the researcher in expressing these qualitative rules. Theoretical chemistry, like all other scientific disciplines, has a democratic structure (Preti 1957): that means its use should not be limited to people possessing a peculiarly high level of maturity, prudence, and self-control. Procedures have to be carefully described and documented, and the results should be reproducible. We may envisage a possible evolution of theoretical chemistry after the Second World War based on a non-democratic methodology. The outcome would have been a scientific body with few direct links to the main body of chemistry, leaving the latter without a serious and complete theoretical framework. Such would have been the result without the efforts spent in the last forty years to impede the splitting of quantum chemistry into separate disciplines, as sketched in Sect. 3.

In Section 2, I pointed out that the methodological ‘hard core’ of chemistry has preserved several important characteristics during its evolution across the past and the present century. One is the attitude of giving no space to ‘metaphysical’ concepts. Its importance has also been stressed: in that respect, chemistry paved the way for other scientific disciplines and enjoyed for a long time, in the view of educated people, the advantage of being the first important field of science to have a complete and coherent approach to scientific inquiry.

The cutting edge of the discipline, the theory, has now recovered this position. The number of quantities having citizenship in the methodological framework of chemistry has greatly increased compared with the past century. The macroscopic quantities I mentioned at the beginning continue to play their role. In the meantime, a large number of quantities derived from more complex instrumental techniques have been added to the ‘hard body’. The concepts of theoretical chemistry may now be considered additional homogeneous components of this ‘hard body’.

It may be said that we chemists have experienced a crisis (a growth crisis), partially summarized in this analysis, and that we are now recovering. Part of the crisis was settled within a relatively limited number of years (at the beginning of the century); another part, regarding the theoretical foundations, took almost seventy years to find a solution. Too long a time. In addition, help from another discipline, computer engineering, was necessary to reach a satisfactory solution. It has been a serious crisis, indeed.

I have been a bit too optimistic in the preceding sentences. They could be read as an expression of the ‘optimism of the will’, as Gramsci put it in a celebrated sentence; but they need not be contrasted with the ‘pessimism of reason’ that completes Gramsci’s dictum.

I have nothing to retract from the feelings I have expressed here, but there is a need to fully convince the chemical community of their validity. There is a large and growing awareness of the role theoretical and computational chemistry may play in chemical research. This awareness is far from complete, however; the attitudes described, for example, by Counts (1989) still survive in many laboratories. More importantly, it is necessary to convince the chemical community that the rigorous definition of interpretative methods in theoretical chemistry gives more strength to the intellectual dignity of the discipline. This awareness then has to be transmitted to the level of civil society. Laymen need to be convinced that chemistry is not a merely technical discipline, and that chemists are not only experts in technical problems [5].

Theoreticians have further tasks. The program of bringing the existing body of theoretical methods to the methodological status we have depicted is not yet completed. In some cases (mainly at the periphery of theoretical chemistry) the program of redefinition of concepts, verification, comparison among competitive models, and validation (falsification, when necessary) has not even been initiated. It requires great effort, but it is necessary. In other cases, the analysis has been done, but researchers prefer to use simpler models even beyond their limits of applicability. Such a practice is justified by saying that a simpler model, even when known to be inaccurate, gives some hints, and that more accurate studies will follow in the future. In some cases, such a position comes close to a malpractice that would not be tolerated in other fields of chemistry. In any case, the theoretical community has to exert its influence, by encouraging methodological studies for the required applications of the theory and by rejecting attempts at malpractice with the instruments given by the peer-reviewing procedures.
 

8.1 Further tasks for theoreticians

Another point, more important than the preceding ones, must be stressed, one that gives a different perspective to the analysis I have presented. The rapid evolution of chemistry (in all its fields) is another piece of empirical evidence, for which we shall give neither examples nor quantification. This evolution is also reflected in the theory. If asked to express a forecast for theoretical activity in chemistry in the near future (the impending beginning of the new millennium stimulates such forecasts), I would surely indicate a vigorous growth of the modern methods of theoretical-computational chemistry, not only for interpretation, but also for obtaining data about molecules, reactions, and materials, in strict cooperation with the experimental branches of the discipline. However, I would also point to the impending occurrence of another crisis; no longer due to quantum theory, but to the inherently complex structure of the problems to which chemistry is extending its frontier.

There are clear indications that the challenge presented by these problems stimulates the use of new ‘concepts’ and procedures lacking an adequate theoretical foundation and a detailed, accurate quantification. I am not speaking of the ‘hybrid methods’ that now enjoy large popularity in theoretical chemistry (Tomasi et al. 1998): their lack of self-consistency may be tempered by refinements of the models and by the use of appropriate techniques of validation. This work is in progress, and there is no doubt that hybrid models will rapidly reach the status of more legitimate and standard procedures. I am thinking rather of other approaches, generally of a non-quantum nature, among which is a large portion of the holistic models I mentioned before. Some of them have found technical support and theoretical motivation in another discipline, such as information theory; others have a less well-defined position, combining elements from specialized branches of mathematics, engineering, and physics.

If this forecast is correct, our preceding analyses of the taxonomy of models and the structure they have in theoretical chemistry should be revised. The final objective of these renewed analyses would remain the same, however: theoretical chemistry has to preserve its status of democracy, and its recently regained coherence with the methodological main body of chemistry.

There is no reason to be disappointed or offended by the occurrence of a new crisis due to the introduction of concepts and methods of ‘non-legitimate’ origin. Progress (stimulated in this case by advancements of the discipline, especially in the field of complex systems and materials, and not by changes in the underlying theory) may perhaps proceed faster by provisionally setting aside preoccupations about precision, self-consistency, and congruence. The community of theoreticians has to accept this. However, by exploiting the experience of the past with critical awareness, we should be able to resolve the new growth crisis much faster. That will again require considerable efforts to rework the new theoretical elements methodologically, with the necessary ingenuity and acumen. The theoretical community has strength enough to do it.

I have concluded my remarks about the structure of theoretical chemical concepts in past and present times with a brief view into a possible near future. I may now come back to the question I put at the beginning of this essay. It is clear that I have given no real answer, and probably the themes I have touched in the preceding pages are not the most important ones for understanding why chemistry contributes so little to the critical discussion of the scientific problems of general interest to our society. I hope that the second objective motivating this analysis, namely to give more strength to the effort spent by many theoreticians to improve the methodological status of the discipline and to express some guidelines for the continuation of this effort, has been at least in part reached.

Among the remarks I have expressed, there is one point that may give some hints towards an answer to the initial question. I said that the theoretical activity made possible by the advent of computers has to a great extent been done ‘at home’, in laboratories not expressly created for it, and with little support from the large computational centers created over the years to satisfy other needs. Computational chemistry prospered with a decentralization of its modes of production, maintaining aspects of a ‘cottage industry’ (Bolcer & Hermann 1994). There are several reasons for this outcome: some are related to the policy of governmental funding (in the U.S.A. and then, by imitation, in other countries); others are related to advances in computer hardware (for several years now, low-cost small computers have been more convenient than the larger computers run by national centers). Another motivation, of a more general character and briefly mentioned before, is worth considering here again.

The practice of developing the instruments of calculation and analysis at home, of which I have stressed some inconveniences, is actually congruent with the very nature of chemistry, addressed as it is to the accurate study of a myriad of problems, each requiring separate consideration, each presenting new problems and opening the way to unexpected developments. Chemistry has evolved and progressed in that way and continues to do so. The formulation of very big research projects on which all efforts concentrate, draining financial and human resources from other, more scattered activities, does not correspond well to the intrinsic structure of chemistry.

In fact, there is no significant ‘big science’ in chemistry. Big science was first realized for military purposes (nuclear physics) and then easily extended to elementary particle physics (it would be interesting to analyze to what extent the quantum complementarity between particles and interactions has been exploited to support this realization of big science). Experience has shown that big science pays well at the political level. This aspect has encouraged many other big-science projects, covering almost all fields of science. There is no need to give complete listings. The Genome project in biology, the Mohole project in geology, and the Hubble telescope in astronomy are sufficient to show that there have been explicit efforts to formulate big-science projects. While some projects were well justified and others less so, all have the complementary effect of arousing the attention of media, politicians, and laymen. This means that scientific disciplines with big-science projects have a greater opportunity to influence general opinion also on more basic issues.

Provisionally accepting this analysis as correct, I end by putting another question. Is it advisable to try to increase the impact of the ‘chemical way of thinking’ on discussions about knowledge, science, and other human intellectual and practical interests by making efforts towards big-science projects in chemistry? My answer is negative. Chemistry proceeds better with widespread approaches, seeking union, when necessary, in a pragmatic and flexible way on more restricted problems, as it has done in the past. There is a continuous line, starting with the balances and flasks of Lavoisier and ending in the present research laboratories, where computers and theoreticians have their place, that better defines the contribution chemistry may give to the human community.
 
 

Notes

[1]  Since I am not aware of studies on the contribution of the sciences to political language, my preceding statements are based solely on personal, quite unsystematic readings. The main trend seems to me evident enough, but I am unable to quantify it or to define its timing in a less vague manner. In his book on the historian’s craft, the great historian Marc Bloch (1949) made sharp remarks on the intrinsic difficulty his own discipline has in building non-arbitrary concepts in the way chemistry did. Maybe this has been done in part by introducing ‘chemical metaphors’ into history.

[2]  I shall confine myself to research in theoretical chemistry: the extension to other fields of chemistry would require some changes and additions, but the essential points remain unaltered.

[3]  It is my firm belief that the combination of MOs, LOs, and NOs constitutes the best definition of subunits we now have at the quantum level. For reasons of space, I shall not present detailed arguments for this belief. Interested readers may compare my view with the opposite view expressed some years ago (Trindle 1984). Such a comparison should consider the influence of the evolution of the mathematical models on methodological considerations, and the remarkable progress in MO-based models.

[4]  It must be added that there are other new ideas that have been formulated and developed in recent years. Reasons of space do not allow their inclusion in the present analysis. I recall, as an example, Parr’s elaboration of the density functional theorems, which has shed new light on several important chemical concepts and permitted the introduction of new ones (Parr & Yang 1989). The interpretative impact of these new concepts is quite remarkable.

[5]  There is a lot of confusion with reference to technicians of other origins and other expertise. Many technical problems are in fact related to interdisciplinary projects: the case I recently noticed on Italian TV, of an expert in nutritional sciences gravely expressing opinions about the possibility (or rather, impossibility) of modifying chemical equilibria via the substitution of chemical groups, is an indicative example.
 
 

References

Alagona, G.; Bonaccorsi, R.; Ghio, C.; Montagnani, R.: 1988, ‘Towards a unified view of the description of internal and external fields acting on chemical functional groups’, Pure and Applied Chemistry, 60, 231-244.

Bader, R.F.W.: 1990, Atoms in Molecules: A Quantum Theory, Oxford Univ. Press, New York.

Bloch, M.: 1949, Apologie pour l’histoire ou métier d’historien, Paris (English edn.: Historian’s Craft, Vintage Books, New York 1964).

Bolcer, J.D.; Hermann, R.B.: 1994, ‘The Development of Computational Chemistry in the United States’, in: K.B. Lipkowitz, D.B. Boyd (eds.), Reviews in Computational Chemistry, vol. 5, VCH Publishers, New York, pp. 1-63.

Bonaccorsi, R.; Ghio, C.; Scrocco, E.; Tomasi, J.: 1980, ‘The effect of intramolecular interactions on the transferability properties of localized descriptions of chemical groups’, Israel Journal of Chemistry, 19, 109-126.

Boys, S.F.: 1960, ‘Construction of Some Molecular Orbitals to Be Approximately Invariant for Changes from One Molecule to Another’, Reviews of Modern Physics, 32, 296-299.

Bunge, M.: 1985, La investigación científica, Ariel, Barcelona.

Coulson, C.A.: 1960, ‘Present State of Molecular Structure Calculations’, Reviews of Modern Physics, 32, 169-177.

Counts, R.W.: 1989, ‘What is Research?’, Journal of Computer-Aided Molecular Design, 2, 329.

Dirac, P.A.M.: 1929, ‘Quantum mechanics of many-electron systems’, Proceedings of the Royal Society (London), A 123, 714-733.

Eyring, H.; Walter, J.; Kimball, G.E.: 1944, Quantum Chemistry, Wiley, New York.

Maksic, Z.B.: 1991, ‘Modelling – A search for simplicity’, in: Z.B. Maksic (ed.), Theoretical Models of Chemical Bonding. Part 1: Atomic Hypothesis and the Concept of Molecular Structure, Springer-Verlag, Berlin, pp. XIII-XIX.

Pauling, L.: 1939, The Nature of the Chemical Bond, Cornell Univ. Press, Ithaca.

Parr, R.G.; Yang, W.: 1989, Density Functional Theory of Atoms and Molecules, Oxford Univ. Press, New York.

Preti, G.: 1957, Praxis e Empirismo, Einaudi, Torino.

Primas, H.: 1983, Chemistry, Quantum Mechanics and Reductionism, Springer-Verlag, Berlin.

Tomasi, J.: 1988, ‘Models and modeling in theoretical chemistry’, Journal of Molecular Structure (Theochem), 179, 273-292.

Tomasi, J.: 1996a, ‘Quantum Chemistry: the New Frontiers’, in: Y. Ellinger, M. Defranceschi (eds.), Strategies and Applications in Quantum Chemistry, Kluwer, Dordrecht, pp.1-17.

Tomasi, J.: 1996b, ‘Boys’ contribution to the evolution of molecular quantum mechanics’, in: S. Bachrach (ed.), 3rd Electronic Computational Chemistry Conference, (unpublished; a copy may be asked from the author).

Tomasi, J.; Alagona, G.; Bonaccorsi, R.; Ghio, C.; Cammi, R.: 1991a, ‘Semiclassical interpretation of intramolecular interactions’, in: Z.B. Maksic (ed.), Theoretical Models of Chemical Bonding. Part 3: Molecular Spectroscopy, Electronic Structure and Intramolecular Interactions, Springer-Verlag, Berlin, pp. 545-614.

Tomasi, J.; Bonaccorsi, R.; Cammi, R.: 1991b, ‘The extramolecular electrostatic potential. An indicator of chemical reactivity’, in: Z.B. Maksic (ed.), Theoretical Models of Chemical Bonding. Part 4: Theoretical Treatment of Large Molecules and Their Interactions, Springer-Verlag, Berlin, pp. 229-268.

Tomasi, J.; Mennucci, B.; Cammi, R.: 1996, ‘MEP: a tool for interpretation and prediction. From molecular structure to solvation effects’, in: J.S. Murray, K. Sen (eds.), Molecular Electrostatic Potentials. Concepts and Applications, Elsevier, Amsterdam, pp. 1-103.

Pomelli, C.S.; Tomasi, J.: 1998, ‘QM/MM’, in: P.v.R. Schleyer, N.L. Allinger, T. Clark, J. Gasteiger, P.A. Kollman, H.F. Schaefer III, P.R. Schreiner (eds.), The Encyclopedia of Computational Chemistry, Wiley, Chichester, vol. 4, pp. 2343-2350.

Trindle, C.: 1984, ‘The Hierarchy of Models in Chemistry’, Croatica Chemica Acta, 57, 1231-1245.


Jacopo Tomasi:

Dipartimento di Chimica e Chimica Industriale, Via Risorgimento 35, 56126 Pisa, Italy; tomasi@dcci.unipi.it

Copyright © 1999 by HYLE and Jacopo Tomasi