The Structure of Chemistry
Ted was well known for his wide interests, and prominent among these were philosophy of science and science and religion. As a professional chemist he felt that the philosophy of science should not always be written from the point of view of the physicist. The first part of The Structure of Chemistry illustrates that much of his approach to the philosophy of science was based on a more than superficial knowledge of the history of science. As a university teacher, Ted believed that students should appreciate the wider aspects of their subject, such as the status of scientific theories. His support for the view that students should be exposed to a little history and philosophy of science was sometimes opposed by his university colleagues, who argued that this would steal time which could be devoted more usefully to cramming them with more chemical information.
As a Christian and a Roman Catholic, Ted saw science and religion as having complementary views of the world. He insisted on the rational nature of Christianity and, in his book The Power and Limits of Science (1949), argued that science could not provide the answer to the whole of human experience. At the University of Kent in his later years he was the prime mover in establishing an interdisciplinary Science Studies Group, which met on a regular basis and brought together academics from the different Faculties for a talk and discussion, followed by dinner, meetings which, alas, are no more.
Maurice Crosland, University of Kent
As a preliminary indication of its place on the map of knowledge, chemistry may be characterized as a natural science, a physical science, and an experimental science. The expression ‘natural science’ marks off the kinds of question asked in chemistry from those asked in philosophy. The term ‘physical sciences’ here denotes those sciences which deal with inanimate matter and use measurement as their fundamental tool. Some of these are mainly observational, as for instance geology and astronomy, and so differ somewhat in their procedure from the experimental sciences. The experimental physical sciences are physics and chemistry. Although these sciences have traditionally been distinguished, for historical and accidental reasons, it is hard to see that there is any fundamental difference between them. There is certainly a difference of emphasis, in that most chemists are on the whole more interested in phenomena that depend on the specific properties of particular kinds of material. But in method, type of evidence, and type of conclusion there is no fundamental difference. Indeed, there are already wide borderlands known as physical chemistry and chemical physics. Chemistry is a quantitative science, based on measurement; it shares with physics both the analytical power and the limitations of the metrical approach.
There is no need to emphasize here the limitations of a science concerned
with the measurable properties of material objects. Naturally, if we confine
ourselves to measurements as evidence, we can only reach the conclusions
they are capable of yielding, namely, laws describing phenomena and theoretical
equations or models to explain them. On the other
hand, it is important to appreciate the advantages of the metrical approach,
which has turned out to be the key to questions of the kind asked in the
physical sciences. Before modern chemistry was developed, the alchemists
were adepts at the observation of qualitative changes during chemical reactions
– changes of colour, clarity, volatility, and so on – but there was little
progress in the theory of chemical change until the quantitative use of
the balance was recognized as decisive. The approach by measurement has
been the successful one and it is now permanently built into chemistry.
These hypotheses were uniformly confirmed when the more direct methods of investigation by diffraction of X-rays, electrons and neutrons, and by spectroscopy, were developed in the twentieth century. For simple molecules, detailed models specifying the positions and sizes of the atoms, and their motions, can now be constructed. These enable us to visualize chemical compounds and their reactions in some detail, and constitute a very powerful body of theory. Meanwhile, physical chemists since the 1880s have also built up a theory of electrolyte solutions in terms of ions – charged particles of molecular dimensions – by means of which the conductivity and other properties of these solutions can be quantitatively interpreted.
More direct evidence that matter can be broken down into small discrete units of definite types is provided by experiments which detect single particles, such as those which use scintillating screens, Geiger counters, cloud-chambers, bubble-chambers and photographic emulsions. The successful interpretation by Perrin, in 1909, of the Brownian movement in terms of random molecular motion, and the derivation from this and other experiments of consistent values for the number of molecules in a given amount of material, point in the same direction.
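Perrin’s analysis rested on Einstein’s relation (1905) for the mean-square displacement of a suspended particle of radius $a$ in a fluid of viscosity $\eta$ after a time $t$; since $R$, $T$, $\eta$ and $a$ are measurable, the observed displacements yield a value for Avogadro’s number $N_A$:

```latex
\langle x^2 \rangle \;=\; \frac{RT}{N_A}\,\frac{t}{3\pi\eta a}
```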
The mode of linkage of atoms aroused speculations from the earliest days of the theory. Berzelius, for example, thought it might be an attraction between opposite charges. Rutherford’s picture of the atom (1911), as a nucleus surrounded by planetary electrons, led to the view that atoms might become linked either by exchanging an electron or by sharing a pair of electrons. By the time that Bohr and Sommerfeld had developed the theory of electronic orbits to account for spectra, with the aid of the old quantum theory, the electronic explanation of chemical affinity had reached an advanced stage. With the advent of wave-mechanics in 1926, the theory had to be recast; it is still in active development.
The atomic theory came rapidly into use, but sceptical comments were
heard throughout the nineteenth century. At first
these were due to the difficulty mentioned above about such molecules as
O2 and the consequent uncertainties about atomic weights. When
the Royal Society awarded a medal to Dalton in 1826, it was for his work
on combining weights, which the President (Sir Humphrey Davy) was at pains
to distinguish from the atomic hypothesis. In mid-century the scepticism
seems to have been influenced by the positivistic teaching of Comte in
Paris, and later by Mach. A lecture by Williamson to the Chemical Society
of London in 1869 aroused considerable criticism because he was thought
to have presented the theory not as a hypothesis but as a fact. The notion
that the atoms in a molecule might be in a definite spatial pattern was
received with derision by some competent chemists, among whom was Kolbe.
Around the turn of the century, Ostwald was arguing that though chemistry
needed the atomic theory, its truth could not be proved. The adaptability
of the theory made it invaluable in exposition, but its appeal to unperceivable
entities was distasteful. We have to remember that until late in the nineteenth
century there were no phenomena attributable to single molecules. But after
the interpretation of the Brownian movement in terms of the motion of individual
molecules, and the observation of effects attributable to single atoms
as in the spinthariscope, scepticism about the theory seemed to have ceased.
It would seem to be impossible now to formulate chemical theory without
appealing to atoms, their electronic structures, and their spatial arrangements.
The energy of a system in the course of a given change in its state depends on the heat absorbed, the work done, and the material gained or lost by the system. Energy is a mathematical function related to these quantities, which are in turn related to observable quantities. The justification for the definition of energy as a thermodynamic function is to be found partly in the direct experimental evidence for the interconversion of heat and work, and partly in the experimental verification of its consequences. Work and heat are often called ‘forms of energy’, but we should beware of thinking of them as if they were the same ‘stuff’ in different forms, or as if energy were some kind of fluid. Energy is simply a mathematical function, defined in terms of quantities that can be experimentally determined.
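In symbols (a sketch only, using the usual modern sign convention, with $q$ the heat absorbed by the system and $w$ the work done on it): for a closed system the first law gives the energy change directly, and when matter may be gained or lost the fundamental equation adds a term for each species present,

```latex
\Delta U = q + w, \qquad
dU = T\,dS - p\,dV + \sum_i \mu_i\,dn_i ,
```

where each quantity on the right is defined in terms of measurable properties; this is the sense in which energy is ‘simply a mathematical function’ of experimental quantities.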
Chemistry gained greatly from the application to pure substances of the notions of energy and entropy. The idea of ‘chemical affinity’, for example, could now be put on a quantitative basis, in terms of free energy and equilibrium constants. Chemical thermodynamics today consists of a rigorous mathematical scheme deduced from the first and second laws of thermodynamics, together with a vast mass of experimental data allowing the application of this scheme to actual phenomena. The degree of systematization achieved is very remarkable. All this, it may be noted, is quite independent of the atomic theory.
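The quantitative link between free energy and equilibrium constants can be sketched as follows; the relation used is the standard one, but the reaction and the value of the free-energy change are hypothetical, chosen only to show the arithmetic:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def equilibrium_constant(delta_g_standard, temperature):
    """Equilibrium constant from the standard free-energy change,
    via delta_G_standard = -RT ln K."""
    return math.exp(-delta_g_standard / (R * temperature))

# A hypothetical reaction with delta_G_standard = -10 kJ/mol at 298.15 K:
K = equilibrium_constant(-10_000, 298.15)
# K comes out somewhat above fifty, i.e. products are strongly favoured;
# a positive delta_G_standard of the same size would give K well below one.
```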
The application of similar ideas to the molecular picture was equally
fruitful. The first adequate kinetic theory of gases was published by Clausius
in 1857, and later developed by Clerk Maxwell and others; it accounts successfully
for many of the physical properties of gases. Chemical properties have
been handled by statistical mechanics, which relates them to the properties
of individual molecules. It is possible in simple
cases to calculate equilibrium constants, for example, from the detailed
molecular models that result from current structural investigations. This
development has given the molecular picture a striking power of quantitative
explanation, and thereby strengthened the evidence for it. Finally, the
application of quantum mechanics to the detailed structure of molecules
has led to theories of valence which, though their development requires
complex mathematics, are beginning to explain chemical affinity.
In chemistry there are two different types of empirical law, based directly on the experimental data:
(a) Functional relations between variable properties of a given system; for instance, the relation between the temperature and volume of a gas at constant pressure, or between specific heat and temperature, or between the rate of a reaction and the temperature. Correlations of this type, concerned with co-variance, are the constant preoccupation of physical chemists, and increasingly of organic and inorganic chemists also. Some of these functional relations state the properties of pure substances (for instance the specific heat curve of a substance as a function of temperature); others are concerned with rates and equilibria in physical or chemical changes.
(b) Laws stating that there are kinds of material, with reproducible properties, such as hydrogen, sulphuric acid, or common salt. The evidence is that the chemist’s ‘pure substances’ exhibit a constant association of characteristics, which are correlated, each with the others, in the sense defined above. Thus hydrogen has (under given conditions) always the same boiling-point, density, spectrum, and chemical properties. The whole of chemistry depends on our being able to isolate pure substances with reproducible properties. This fact, incidentally, refutes the claim, sometimes heard, that the physical sciences seek functional relations only, never definitions of ‘natural kinds’ as in biological classification.
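A functional relation of type (a) — the dependence of a reaction rate on temperature — can be sketched numerically; the Arrhenius form is standard, but the constants below are hypothetical, chosen merely for illustration:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius_rate(pre_exponential, activation_energy, temperature):
    """Rate constant as a function of temperature: k = A exp(-Ea / RT)."""
    return pre_exponential * math.exp(-activation_energy / (R * temperature))

# Hypothetical constants: A = 1e13 s^-1, Ea = 50 kJ/mol.
k_298 = arrhenius_rate(1e13, 50_000, 298.15)
k_308 = arrhenius_rate(1e13, 50_000, 308.15)
# With constants of this size the rate roughly doubles for a ten-degree
# rise in temperature -- the familiar rough rule of bench chemistry.
```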
Such laws are always subject to correction; they do not attain certainty. It is always possible that some relevant factor has been overlooked. Thus, the melting-point of ice would once have been expressed simply as 0° Centigrade; but investigation showed that pressure has an appreciable effect on it, so that in an exact account of ice we must now state its melting-point as a function of pressure. Again, ordinary hydrogen was thought until 1931 to be a pure substance, but is now recognized to contain a little deuterium, which can be isolated and has markedly different physical properties. Empirical laws are continually being improved and made more accurate, more specific, and more precise. In retrospect these improvements can be attributed to the results of tests of the law under diverse circumstances. The corresponding experimental rule, used by every chemist when he has to proceed purely empirically, is ‘vary one factor at a time’. But this is not the whole, or even the half, of scientific method; for the art lies in guessing which factors are relevant. (This was the element missing from Bacon’s account of scientific method.)
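The pressure-dependence of the melting-point of ice, mentioned above, follows from the Clausius–Clapeyron relation; the sketch below uses rounded handbook values and is meant only to show the order of magnitude of the effect:

```python
# Clausius-Clapeyron for the fusion of ice: dT/dp = T * delta_v / delta_h.
T_MELT = 273.15     # melting-point of ice at one atmosphere, K
DELTA_H = 6010.0    # molar heat of fusion, J mol^-1 (rounded)
DELTA_V = -1.6e-6   # volume change on melting, m^3 mol^-1 (water is denser than ice)
ATM = 1.013e5       # one atmosphere in Pa

dT_dp = T_MELT * DELTA_V / DELTA_H       # K per Pa; negative for ice
shift_per_100_atm = dT_dp * 100 * ATM    # K per hundred atmospheres
# Pressure lowers the melting-point by a fraction of a degree per hundred
# atmospheres -- appreciable, which is why an exact account must state the
# melting-point as a function of pressure.
```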
‘Second-order’ empirical generalizations also find a place in chemistry.
The Periodic Table, for instance, embodies a great many of these regularities,
such as the resemblances between the halogens, and the gradations in their
properties. Other examples are the general rule that gases have similar
properties at temperatures proportional to their critical temperatures
(the ‘law of corresponding states’), and the rule that salts which are
comparatively involatile are also comparatively insoluble in organic solvents.
Many such rules about chemical reactivity are used in the synthesis of
new organic compounds. ‘Second-order’ laws such as these are naturally
more subject to exceptions and corrections than laws derived directly from
experiment. They have played an important part in the construction of theories;
the Periodic Table, for instance, was of great help in assigning electronic
structures to the elements.
Many attempts have been made to solve, or to dissolve, the problem of induction. Some have attacked it with the aid of the theory of probability, but without success; our confidence in scientific laws cannot be given a numerical measure, unlike our expectation of life or our hopes at roulette, to which the calculus of probability properly applies. Others have tried to by-pass the difficulty, by saying that scientists do not make categorical general statements, but only postulate and test hypotheses (Sect. 4.3); however, these hypotheses are certainly meant to be of general application, and the same difficulty arises as before. Others, more radical, argue that scientists should not make general statements at all, but only tell us to expect certain results in certain circumstances; but this does not explain the scientist’s confidence in the general statements which he undoubtedly makes. The whole problem is still open and seems to call for a new approach.
It seems clear that we cannot refute the objections of a sceptic who
will accept science only if he can reduce it to deduction; we can only
hope to show him that his demands are unreasonable. But it remains open
whether we should regard scientific reasoning as sui generis and
in some way self-evidently reasonable, and reject attempts to relate it
to other forms of reasoning; or whether it would be profitable to consider
it in relation to a broader investigation of interpretation as a
mode of passing from evidence to conclusion. Understanding by interpretation
of signs is certainly commoner than deduction in our commonsense knowledge,
such as that by which we recognize friends, judge weather prospects, or
predict the outcome of a lawsuit or a public debate; it is prominent also
in legal procedure, in historical investigation, and indeed in most fields
of enquiry. It is also concerned in the construction
of scientific theories, to which we now turn.
Scientists influenced by positivism have often held that theories express nothing that was not in the laws, and therefore do not explain. Pierre Duhem, for example, held with Mach that the aim of theory is intellectual economy. "A physical theory is not an explanation. It is a system of mathematical propositions, deduced from a small number of principles, which aim to represent as simply, as completely, and as exactly as possible a set of experimental laws." The same notion is perhaps implicit in contemporary comparisons of a theory to a map; for a map is a guide to the physical features of a landscape but does not explain them.
If a theory were to do no more than represent laws in a compact form, it would be deducible from the laws by strict formal logic. It would therefore contain no terms that are not to be found in the laws (or else derivable from them by a formal definition, such as that of the term ‘energy’ as it occurs in the law of conservation of energy). This may be the case for certain abstract mathematical treatments of physical phenomena, such as Fourier’s theory of heat, or Ampère’s of electrodynamics; probably it was theories of this type that Duhem had in mind. But the case is otherwise for the theories that we have noted as typical of chemistry. This is particularly obvious for the atomic-molecular theory. Dalton did not deduce his theory from the laws of chemical composition, nor could it have been so deduced, for it makes statements about entities that are too small to be perceived, and so contains terms that cannot even in principle be deduced from the laws that are taken to support it. In fact Dalton invented the theory with the aid of his imagination, as an interpretation of certain observations, and adjusted it until a variety of its consequences agreed with known laws. The theory is a construction, not a deduction. It goes beyond representing the laws; it interprets them.
The example of atomic theory seems to be decisive, since every chemist today would regard it as indispensable, for the reasons given earlier. Duhem, writing before the developments of the last half-century, regarded such models as mere crutches for the imagination, or (to change the metaphor) as so much scaffolding which could be discarded once it had given access to the correct abstract relations. But even in Duhem’s time there was a theory, essential to chemistry, which did not fit his criterion, namely the classification of pure substances into elements and compounds (see Sect. 2.2). This was not a mere re-statement of the laws describing the substances formed in chemical reactions between given reagents. It was not forced upon chemists as a deduction from these laws; it was an interpretation of them by a set of hypotheses, stating which substances are elements and which are compounds. These hypotheses were supported by the facts inasmuch as they led to a self-consistent account of chemical changes. We are so convinced of the truth of the resulting classification, and so accustomed to speaking of reactions as ‘decompositions’, ‘substitutions’ and so forth, that we tend to forget that these are not simply empirical descriptions, but depend on additional hypotheses. This is easier to realize when we remember that Lavoisier’s theory had to meet an alternative classification provided by the phlogiston theory, which, after Cavendish had revised it, was a respectable hypothesis and could account plausibly for the chemical facts then known. Again, when Davy prepared sodium from caustic soda by electrolysis, he supposed that the reaction was a decomposition of the caustic soda; but Dalton, remarking that water was present and would produce hydrogen, preferred to regard caustic soda as an element and sodium as a hydride. It is clear that the classification of these reactions is hypothetical.
The question of the explanatory function of theories arises here, as the quotation from Duhem shows. If a theory were simply a compact re-statement of laws, it could not be said to explain or interpret them. It would be simply an instrument or convenient calculating device for making correct predictions, as, for instance, astronomers may use the laws of planetary motion to predict eclipses. This is the role assigned to theory by a variety of views which may be called ‘reductionist’, since they seek to reduce theories to re-statements of observations. An example is the operationalist view, which would reduce the meaning of theoretical concepts to the operations that have to be performed in testing the theory. This view can give a good account of measurement, but it breaks down when applied to chemical theory, for the meaning of terms such as ‘atom’ is not defined by the operations that yield the experimental results on which we base the theory. Nor are theories used in chemistry solely as instruments. They may be so used in the synthesis of new compounds, or to predict the factors that will be relevant in some new field, such as radiation chemistry. But in most chemical activities theories are of interest because they offer explanations of observations that would otherwise be puzzling. They are developed to help us understand the phenomena, not merely to describe them. The use of molecular models is a particularly clear indication of this role of theory.
In what sense do theories ‘explain’? It seems now to be generally assumed by logicians that theories explain laws in the sense that the laws can be deduced from them. Explanation in this sense means that the complex is reduced to the simple; the number of unrelated concepts is reduced. But most chemists are more satisfied with a molecular model that can be visualized than with a formal mathematical scheme, although the same conclusions may be deducible from both. They are mostly happier with a model that can be drawn on paper, or constructed of balls and springs, than with molecular-orbital calculations, although they know that the structure of benzene (for example) can be correctly deduced from the molecular orbitals. Does this perhaps mean that explanation, as N.R. Campbell suggested, requires that an analogy be drawn between the system and some more familiar system whose laws are already known? The kinetic theory of gases is a case in point; the unfamiliar laws describing the behaviour of gases are explained, says Campbell, by relating them to motion, which is very familiar. Similarly, the unfamiliar laws of chemistry might be said to be explained by the analogy with familiar mechanical models.
But this account is plausible only so long as the models are mechanical.
The modern model of a molecule does not follow the laws of macroscopic
mechanics; energy is gained or lost only in quanta, and the atoms cannot
even be assigned a precise position or velocity. When pressed, we have to
admit that simple mechanical models do not fit the observations. Moreover the
common use of the Schrödinger equation shows that formal mathematics
may replace the imaginative manipulations of a mechanical model.
Explanation, it seems, does not depend on familiarity; indeed, it is truer
to say that the known is explained by the unknown, inasmuch as the known
is complex whereas the unknown postulated by our theories is simpler.
Models are explanations inasmuch as they embody, so to speak, the correct
equations. Our feeling of greater ease with models that we can draw on
a scrap of paper must, it seems, be concerned not with the explanatory
power of the theory, but with its other great attribute: its applicability.
A ‘good’ theory is one that can be manipulated and applied easily to new
situations; this is what determines its contribution to the extension of
a science. The atomic-molecular theory, with the exact yet flexible notation
developed for it, is exceptionally widely applicable, and in consequence has
contributed greatly to the extension of chemistry.
We must first clarify the use of the word ‘model’ as applied to atoms and molecules. In one sense it may be used to denote a material object such as a construction made from balls and springs, of appropriate size, to represent the structure of a crystal or a molecule; with enough trouble, such a model could be made flexible to show the motion of the atoms in the molecule. But we know that molecules cannot be represented simply as small-scale versions of macroscopic objects; molecular motions follow quantum mechanics rather than classical, atoms cannot be assigned a precise location, and so on. In another sense, then, the word ‘model’ may mean our imaginative picture of the molecule or atom, in which the material object is supposed to be modified in the ways required by quantum theory; the word ‘model’ then refers to a description, an entity that is merely imagined and described, rather than to one which is perceivable. In a third sense, the word is sometimes used to denote the system of mathematical equations which may be used to give exactness to this description – the wave-equation for a hydrogen atom, for example. This mathematical structure has a life of its own, so to speak, and can be made to explain and predict just as the imaginable model can. It is not so amenable to the imagination, but it still constitutes a description. In what follows, we consider models in the sense of descriptions; that is, we exclude the first sense mentioned above, since material models are not to be taken quite literally.
The question is, then, what can be said of the status of our models as descriptions of real systems of nature. To regard them as exact descriptions would seem implausible on the face of it, since new evidence constantly leads us to modify and improve our theories, so that at a given time we can hardly suppose them to be complete. A pointer in the same direction is the fact that widely different models are sometimes found to be associated with the same equation. An interesting example is cited by Sir Edmund Whittaker: "The vibrations of a membrane which has the shape of an ellipse can be calculated by means of a differential equation known as Mathieu’s equation: but this same equation is also arrived at when we study the dynamics of a circus performer, who holds an assistant balanced on a pole while he himself stands on a spherical ball rolling on the ground. If now we imagine an observer who discovers that the future course of a certain phenomenon can be predicted by Mathieu’s equation, but who is unable for some reason to perceive the system which generates the phenomenon, then evidently he would be unable to tell whether the system in question is an elliptic membrane or a variety artiste." This lack of a unique relation between model and equation suggests that the model is not necessarily an exact description of the real system whose behaviour it simulates.
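Mathieu’s equation, in its standard form (here $a$ and $q$ are the conventional parameters of the equation, not properties of either of the two physical systems Whittaker describes):

```latex
\frac{d^{2}y}{dx^{2}} + \left(a - 2q\cos 2x\right) y = 0
```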
This impression is confirmed when we find that the behaviour of a given system may require different models according to circumstances. The fact that a beam of light may be treated by a particle-model in one experiment and a wave-model in another, according to the system with which it is interacting, indicates that neither model is an exact description of the light-beam. It suggests that only some, not all, of the characteristics of the models are the same as or similar to those of the reality. The light-beam has some characteristics in common with a wave travelling down a stretched string, and other characteristics in common with a projectile, but does not share all its characteristics with either. In other words, the models are analogues of the real system. (Two things are said to be analogous, in the terminology of modern logic, if they have some, but not all, characteristics in common.) Whether they agree in other respects is not known; we can only adjust the model in accordance with our evidence, and that evidence is always incomplete.
This view of models as providing analogies is confirmed if we reflect on their logical status, revealed by the way in which they are related to the observational evidence which supports them. A successful model for a given physical system is one that leads to equations that agree with the empirical laws derived from observation. But this agreement does not imply that the system is exactly like the model; only that it is like it in some respects. And this is the definition of an analogue. This conclusion might indeed have been reached on methodological grounds alone, without appeal to the physical experience summarized above. Models, then, are not to be taken as exact descriptions of reality on the one hand, nor as sheer fictions on the other; they are best regarded as providing analogies.
This conclusion is extremely useful in dealing with a variety of pseudo-problems. One such was produced by a misunderstanding of the fact that the behaviour of light requires two analogies, the wave and the particle, according to the type of experiment performed. It was supposed by some that science had led to two incompatible views about the nature of light; naturally, this caused considerable perplexity. The puzzle vanishes if we remember that the wave and particle models should not be taken as exact descriptions of a beam of light, but as analogies for its behaviour, and that the use of different analogies for its behaviour in different circumstances is quite legitimate. Similarly, if we regard the ‘luminiferous ether’ as an analogy, we are no longer puzzled by the fact that it has some of the properties of a material medium (inasmuch as it transmits waves), but not all of them, as was shown by the Michelson-Morley experiment.
We can now consider an answer to the question whether atoms exist. If
this means ‘Does anything exist corresponding exactly to the description
of an atom given by modern theory?’ the answer is ‘Probably not’. But it
is important to add that something analogous to the modern model
exists. We do not know how close the analogy is; in relation to present
knowledge it seems pretty close, but future discoveries may show that it
is as incomplete as the particle theory of light. We can, however, claim
that the analogy is improved in the light of new evidence. The contemporary
model is a closer analogue than Bohr’s, as Bohr’s was closer than Dalton’s,
and Dalton’s than Boyle’s. And this is all we can say, given the essential
incompleteness of scientific evidence.
As far as laws are concerned, this account is certainly an improvement on the naïve Baconian view, according to which we make observations at random and then extract what generalizations we can. As applied to theoretical hypotheses, the account lays stress on an essential feature of a developed science, namely, the role of theory in prediction as well as in explanation. It is certainly true that observations are often undertaken to test some theoretical hypothesis. But as applied to theories this view must be carefully handled; for it would not be true to say that chemists always, or even habitually, set out to test detailed molecular models. Let us briefly examine this assertion.
In the first place, much experimental work is done without the help of a detailed predictive theory. Before a theory of any phenomenon is formulated, there must be some body of observations, which were generally made simply because the phenomenon in question lent itself to experimental investigation and seemed likely to be of practical or theoretical interest. Such observations, when first made, constituted a challenge to theorizers, rather than a test of any existing theory. The predictive role of theory can be over-stated. Scientists have an itch to find things out, as well as to explain; they know that new phenomena may greatly increase their understanding of nature, by throwing up puzzling facts which lead to advances in theory. This is particularly obvious when a new technique is discovered, such as polarography, or chromatography, or isotope exchange; it is tried out in all directions, to see what will happen, just as Galileo tried out his new telescope. The more empirical type of investigation must not be forgotten in an account of scientific method.
In most fields of physico-chemical research, however, neither theory nor experiment has matters all its own way. The typical procedure lies, so to speak, between the empirical and the hypothetico-deductive. It is concerned to give quantitative detail to a theory – to make it exact. Chemists usually have a molecular picture in mind, but often it is not capable of giving an exact prediction, either because it is not specific enough or because the calculation would be too complex. The observations are undertaken to define the model more precisely. The choice of experiment is usually dependent upon a hypothesis of some sort, otherwise chemistry would not be systematic; but the hypothesis is usually a much vaguer affair than the molecular model – it is a guess about some new application of the model, or some improvement to it.
This is a common situation when measurements are made in chemistry, that is to say, in most fields of research other than preparative and synthetic chemistry. For example, in investigations of molecular structure by spectroscopic or diffraction methods, we presuppose the chemical composition of the system, and the number and kinds of atoms composing the molecule, and our experiments allow us to fill in the quantitative detail about the interatomic distances, angles and forces. In thermodynamic and kinetic investigations, similarly, measurements may be used to improve the molecular picture. Measurements of the conductivities of solutions, for example, throw light on the behaviour of ions; measurements of the rates of reactions throw light on their mechanisms. In such experiments we are not testing the model, which is taken for granted; we are trying to make it more precise. Experiment does not wait upon theoretical prediction; it supplies new information on its own account.
Chemistry is a developed science, with a powerful body of theory, but it is a science in which theory is closely dependent upon experiment for its advance. The methodology of this phase of science has been strangely neglected. Logicians seem to have swung from a preoccupation with Mill’s methods of induction to an obsession with the testing of theories; from the procedure of naturalists and social scientists to that of mathematical physicists. It is time that some intermediate – and more typical – kinds of investigation were considered.
Relevant material abounds in the history and current practice of chemistry; but it is seldom quoted in discussions on scientific method. The philosophy of science, like science itself, must advance by trying out its theories to see if they fit the facts, and amending them if they do not. This can only be done if the facts are correctly reported. The implications for contact between philosopher and scientist are obvious.
 See F. Sherwood Taylor, A History of Industrial Chemistry, London, 1957, pt. 1; Partington, A Short History of Chemistry, London, 1948, pp. 153 ff.; R. Hooykaas, in Centaurus, 5 (1948), pp. 307-22.
 Freund, Study of Chemical Composition. In recent years many examples of non-stoichiometric compounds have been discovered, in which the composition is variable although the binding is ‘chemical’. Iron oxide is one common example. But since the properties of such substances can be related to the composition, they do not seem to be in principle any more anomalous than solutions or alloys. For a survey see Emeleus and Anderson, Modern Aspects of Inorganic Chemistry, London, 1952, ch. 13; or R.M. Barrer, "Quelques problèmes de la chimie minérale", Tenth Solvay Conference Proceedings, 1956, pp. 21-68 (in English).
 (a) Dalton, A New System of Chemical Philosophy, Manchester, 1808-27; extracts in (b) Alembic Club Reprints, no. 2 (Edinburgh, 1899) and (c) Leicester and Klickstein, A Source Book on Chemistry, New York, 1952, pp. 208-20. For the origins of the theory in Dalton’s physical work on mixed gases, see (d) Roscoe and Harden, A New View of Dalton’s Atomic Theory, London, 1896, or L.K. Nash in Harvard Case Histories in Experimental Science, Cambridge, Mass., 1957, p. 108. For a general survey of atomic theory see (e) J.C. Gregory, A Short History of Atomism, London, 1931.
 Cf. Leicester and Klickstein, Source Book, pp. 259 ff., and Gregory, Short History, ch. 12. This was one reason why the notion of molecules containing two like atoms, such as O2, seemed unsatisfactory.
 The principle of the conservation of energy was stated by Helmholtz in 1847, and soon after by Clausius (1850) and by Thomson, later Lord Kelvin, who also stated the second law of thermodynamics (1851). (For extracts, see Magie, Source-Book in Physics, New York, 1935.) The essential equations are that which defines the energy change in a closed system as ΔE = q − w, where w is the work done by the system and q is the heat absorbed; and that which defines the entropy change as dS = dq/T, where T is the absolute temperature, and the change must be carried out reversibly, as explained in textbooks of thermodynamics.
 The quantitative measure of heat can be stated in terms of work, and so can that of temperature; see, for example, Zemansky, Heat and Thermodynamics, New York, 1943. Quantities of work are related ultimately to centimetres, grams and seconds.
 Strictly speaking, the fundamental concepts used in this field are (a) energy (translational, rotational, vibrational, or electronic) and (b) the ‘number of complexions’ of a system, which is related to its entropy. Both depend on the mass and the structure of the molecule. Cf. R.C. Tolman, Principles of Statistical Mechanics, Oxford, 1938; R.H. Fowler and E.A. Guggenheim, Statistical Thermodynamics, Cambridge, 1939.
 The history of the emergence of the idea of correlation is too big a subject even to summarize here. The notion becomes explicit in the thirteenth century (see Crombie, Robert Grosseteste and the Origins of Experimental Science, Oxford, 1953).
 This is the classical way in which logicians have dealt with the matter, from Grosseteste and Roger Bacon (Crombie, Grosseteste), through Francis Bacon’s Novum Organum and John Stuart Mill’s System of Logic (1843) to Keynes’s Treatise on Probability (1921), with its stress on increasing the negative analogy. The effect of diversifying the circumstances is to show up any unexpected relevant factors.
 M. Hesse, in Brit. Jour. Phil. Sci., 2 (1952), p. 287; 4 (1953), p. 198; Science and the Human Imagination, London, 1954, ch. 8. Dr. Hesse suggests that the (rather different) scholastic concept of analogy is also relevant here, in that mathematical structure cannot be predicated univocally of natural phenomena.