HYLE--International Journal for Philosophy of Chemistry, Vol. 10, No. 2 (2004), pp. 83-98.
http://www.hyle.org
Copyright © 2004 by HYLE and Otávio Bueno


Special Issue on "Nanotech Challenges"



The Drexler-Smalley Debate
on Nanotechnology: Incommensurability at Work?

Otávio Bueno*

  

Abstract: In a recent debate, Eric Drexler and Richard Smalley have discussed the chemical and physical possibility of constructing molecular assemblers – devices that guide chemical reactions by placing, with atomic precision, reactive molecules. Drexler insisted on the mechanical feasibility of such assemblers, whereas Smalley resisted the idea that such devices could be chemically constructed, because we do not have the required control. Underlying the debate, there are differences regarding the appropriate goals, methods, and theories of nanotechnology, and the appropriate way of conceptualizing molecular assemblers. Not surprisingly, incommensurability emerges. In this paper, I assess the main features of the debate and the levels of the emerging incommensurability, and I indicate one way in which the debate could be decided.

Keywords: nanotechnology, Drexler-Smalley debate, molecular assemblers, incommensurability.


 

1. Introduction [1]

Many debates about nanotechnology emerge from particular visions of the field. We find, for example, visions of a future dramatically changed by the new technology, with the production of materials and objects with atomic precision in a remarkably short time by self-replicating nanobots (Drexler 1986); but we also find the fear that nanotechnology will quickly run out of control, leaving us powerless (Joy 2000). As with most extreme views, it is unlikely that either of these scenarios is completely correct. However, particularly in less radical forms, they may capture something right about certain developments of the field.

In this paper, I examine a recent debate in nanotechnology, which was also motivated by different visions of the field. But, in this case, the different visions involved distinct ways of conceptualizing what is (and is not) feasible in the area, and even alternative standards of assessment of such feasibility judgments. In the debate, we find all the interesting features of scientific debates more generally: curious arguments (and often not unproblematic ones), powerful images, unexpected conceptual shifts, the use of diverse standards, and a good bit of rhetoric. What emerges from the exchange examined here is an interesting perspective on how scientific debates can be conducted and interpreted – and why, sometimes, it is so hard to settle them. Nanotechnology, even at the metalevel, never ceases to be intriguing.

The debate involves two significant characters. On the one hand, we have Eric Drexler, one of the visionaries of nanotechnology. He clearly conceived of a world completely transformed by the developments in the area. A crucial component of his view takes center stage in the exchange below: the notion of a molecular assembler. According to Drexler, such an assembler would be able to build virtually anything with atomic precision and no pollution. His vision was first presented in the 1980s, in Engines of Creation (Drexler 1986), with the more technical details articulated later, in the early 1990s, in Nanosystems (Drexler 1992). Drexler is the chairman and cofounder of the Foresight Institute, an institution that aims to help prepare society for advances in technology, with particular emphasis on nanotechnology. Drexler’s main background is in engineering, and as we will see, it is from the perspective of an engineer that he approaches nanotechnology. As will become clear below, this explains important features of his vision of the field.

On the other hand, we have Richard Smalley. University Professor of chemistry, physics, and astronomy at Rice University, Smalley was awarded the 1996 Nobel Prize in chemistry for the discovery of fullerenes. His current research is deeply immersed in nanotechnology, focusing, in particular, on the chemistry, physics, and potential applications of carbon nanotubes. With his main background in chemistry and physics, Smalley approaches nanotechnology with an eye for what can actually be implemented and controlled in the laboratory. His approach is not only informed by the relevant chemical and physical theories, but it relies deeply on the actual chemical and physical practices to determine the feasibility of proposed views.

What is the issue in the debate between Drexler and Smalley? Briefly put, the question is whether molecular assemblers are possible. As conceived of by Drexler, molecular assemblers are "devices able to guide chemical reactions by positioning reactive molecules with atomic precision" (Drexler 2003a, p. 38). More specifically, the issue is whether it is physically and chemically possible to construct such assemblers; i.e., whether the construction of a molecular assembler is compatible with accepted physical and chemical principles. Drexler claims it is.[2] In his picture, molecular assemblers are basically mechanical devices, controlled by computers to "guide the chemical synthesis of complex structures by mechanically positioning reactive molecules" (Drexler 2003a, p. 38).[3] Smalley disputes the viability of this mechanical picture, challenging the possibility of obtaining the precise control of nanophenomena presupposed by Drexler. According to Smalley, the required control cannot be had – not even in principle.
 
 

2. The debate

The debate starts with Smalley questioning Drexler’s proposal with two arguments: the so-called fat fingers and sticky fingers objections. Smalley’s point is that it is not possible to pick up and place individual atoms with the precision required by Drexler: computer-controlled ‘fingers’ will be too fat and too sticky for that (Smalley 2001). The talk of fingers in this context may seem strange, given that, literally, there are no fingers at the nanoscale. However, as we will see, this talk plays an important rhetorical role in Smalley’s argument, which can be seen as a kind of reductio of the mechanical features of Drexler’s conception. What Smalley wants to highlight with this language is the difficulty of actually implementing Drexler’s vision, according to the standards set by Drexler himself. I will consider each argument in turn.

The fat fingers objection takes seriously the mechanical nature of Drexler’s conception of molecular assemblers, and attempts to show that the unfeasibility of the conception is ultimately due to the mechanical assumptions it requires. As we saw, for Drexler, an assembler will "mechanically [position] reactive molecules" with "atomic precision", and in this way, it will be able to "guide the chemical synthesis of complex structures" (Drexler 2003a, p. 38, italics added). What happens if we take literally the idea of mechanically locating each atom with atomic precision? This would require, according to Smalley, nanobots with manipulator arms – this is the point where the mechanical features are taken at face value. But given that the fingers of the nanobot arm must themselves be made of atoms, there would not be enough room at the nanometer scale to allow the control required to precisely locate each atom. After all, to have complete control of the chemistry, too many fingers in too many arms would be needed. And there is simply not enough room for that. In Smalley’s own words:

Because the fingers of a manipulator arm must themselves be made out of atoms, they have a certain irreducible size. There just isn’t enough room in the nanometer-size reaction region to accommodate all the fingers of all the manipulators necessary to have complete control of the chemistry. [Smalley 2001, p. 77]

According to the sticky fingers objection, the precise control over the positioning of atoms required by Drexler cannot be achieved, given that the atoms of the manipulator arms will interact with other atoms in unintended ways. Simply positioning an atom in a given place is not enough to guarantee that it will interact only with the atoms we want it to interact with. As Smalley points out:

Manipulator fingers on the hypothetical self-replicating nanobot are […] too sticky: the atoms of the manipulator hands will adhere to the atom that is being moved. So it will often be impossible to release this minuscule building block in precisely the right spot. [Smalley 2001, p. 77]

With these two arguments, Smalley thinks that Drexler’s mechanical case for molecular assemblers is fundamentally flawed.

However, Smalley also raises an additional worry. In his view, Drexler needs self-replicating molecular assemblers to implement his vision; otherwise, the rate of production would be too slow. A single non-replicating assembler would take an extremely long time to produce even a mole of something:

Imagine a single assembler: working furiously, this hypothetical nanorobot would make many new bonds as it went about its assigned task, placing perhaps up to a billion new atoms in the desired structure every second. But as fast as it is, that rate would be virtually useless in running a nanofactory: generating even a tiny amount of a product would take a solitary nanobot millions of years. (Making a mole of something – say, 30 grams, or about one ounce – would require at least 6 × 10^23 bonds, one for each atom. At the frenzied rate of 10^9 per second it would take this nanobot 6 × 10^14 seconds – that is, 10^13 minutes, which is 6.9 × 10^9 days, or 19 million years.) [Smalley 2001, p. 76]

In contrast, self-replicating nanobots would be much more efficient. With the ability to self-reproduce, very quickly they could create a whole army of assemblers, which in turn would be able to produce things at a much faster rate.

For fun, suppose that each nanobot consisted of a billion atoms (10^9 atoms) in some incredibly elaborate structure. If these nanobots could be assembled at the full billion-atoms-per-second rate imagined earlier, it would take only one second for each nanobot to make a copy of itself. The new nanobot clone would then be ‘turned on’ so that it could start its own reproduction. After 60 seconds of this furious cloning, there would be 2^60 nanobots, which is the incredibly large number of 1 × 10^18, or a billion billion. This massive army of nanobots would produce 30 grams of a product in 0.6 millisecond, or 50 kilograms per second. Now we’re talking about something very big indeed! [Smalley 2001, p. 76]

According to Smalley, the implementation of Drexler’s vision requires more than just molecular assemblers; these assemblers need to self-replicate as well.
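Smalley’s arithmetic can be checked directly. The short Python sketch below is my own illustration, not part of the original exchange; it simply reproduces the two estimates quoted above, using the figures Smalley assumes (one bond per atom, a placement rate of 10^9 atoms per second, and nanobots of 10^9 atoms that copy themselves once per second).

# Back-of-the-envelope check of Smalley's production-rate estimates
# (illustrative sketch; the figures are those assumed in the quoted passages).

AVOGADRO = 6.022e23        # atoms (and hence bonds) in one mole, about 30 g
RATE = 1e9                 # atoms placed per second by a single nanobot

# A single, non-replicating assembler: time to build one mole.
seconds = AVOGADRO / RATE                      # ~6 x 10^14 s
years = seconds / (60 * 60 * 24 * 365)         # ~1.9 x 10^7 years
print(f"single nanobot: {years:.2e} years per mole")

# Self-replicating assemblers: each nanobot (10^9 atoms) copies itself in
# one second, so after 60 seconds there are 2^60 (~1.15 x 10^18) of them.
population = 2 ** 60
mole_time = AVOGADRO / (population * RATE)     # ~5 x 10^-4 s per mole
print(f"nanobot army: {mole_time * 1e3:.2f} ms per 30 g")
print(f"throughput: {0.030 / mole_time:.0f} kg per second")   # ~50 kg/s

Run as is, the sketch recovers the figures Smalley cites: roughly 19 million years for a lone nanobot, and a fraction of a millisecond per mole – on the order of 50 kilograms per second – for the army of 2^60 replicators.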

How does Drexler respond? First, with regard to the self-replication requirement, even though Drexler himself had an important role in forming the impression that self-replication was necessary for the success of nanotechnology (Drexler 1986), things have changed on this front. Drexler has recently been developing, in collaboration with Chris Phoenix, models that do not require self-replication to implement large-scale systems of productive nanomachinery (see Drexler & Phoenix 2004). The details of these models, however, still remain to be seen.

Second, with regard to the fat fingers and the sticky fingers objections, Drexler insists, as noted above, that his assemblers do not manipulate individual atoms. They manipulate reactive molecules (Drexler 2003a, p. 38). Given that Smalley’s two main objections were based on the difficulties associated with manipulating individual atoms, they just miss the target.

In reply to Drexler’s response, Smalley formulates a second version of the fat and sticky fingers objections, extending to reactive molecules the arguments that were initially couched in terms of individual atoms:

The same argument I used to show the infeasibility of tiny fingers placing one atom at a time applies also to placing larger, more complex building blocks. Since each incoming ‘reactive molecule’ building block has multiple atoms to control during the reaction, even more fingers will be needed to make sure they do not go astray. Computer-controlled fingers will be too fat and too sticky to permit the requisite control. Fingers just can’t do chemistry with the necessary finesse. [Smalley 2003a, p. 39, italics added]

Thus, the original complaint about the unfeasibility of controlling chemical processes with the needed refinement can be easily extended to reactive molecules as well. If anything, in Smalley’s view, the second version of the ‘fingers’ objections is stronger than the first, given that the precise manipulation of a whole reactive molecule requires more ‘fingers’ to control the multitude of atoms involved than the manipulation of just a single atom does. The initial difficulty thus comes back – now multiplied by each atom involved in the process.

In response, Drexler thoroughly rejects the talk of fingers. It is not only that this talk cannot be taken literally; there are simply no such fingers at the nanometer scale. As he points out:

Like enzymes and ribosomes, proposed assemblers neither have nor need these ‘Smalley fingers’. The task of positioning reactive molecules simply doesn’t require them. [Drexler 2003a, p. 38]

In a curious way, both Smalley and Drexler agree on the nonexistence of such ‘fingers’, albeit for very different reasons. Smalley rejects these ‘fingers’ as part of his reductio of the mechanical approach to assemblers, which he correctly takes to be Drexler’s view. Drexler, in turn, denies commitment to these (obviously nonexistent) objects as part of his attempt to defuse Smalley’s objection.

But once this is clear, we can see the significance of Smalley’s ‘fingers’ objection: it challenges Drexler to spell out not the mechanical, but the chemical processes underlying his conception of molecular assemblers. The objection ultimately disputes the feasibility of controlling the chemical reactions that would inevitably take place if a mechanical molecular assembler were ever produced. In this way, by skillfully shifting the issue from the mechanical to the chemical domain, the objection calls into question the viability of Drexler’s proposal.

However, once it is agreed that there are no fingers at all at the nanoscale, Smalley raises a new challenge. If the process of placing reactive molecules does not involve fingers, and if molecular assemblers are to use enzymes and ribosomes in this process – as Drexler himself acknowledges (Drexler 2003a, p. 38) – further difficulties emerge. After all, we should now take seriously the need for describing the chemical processes involved in the implementation of a molecular assembler; in other words, the chemical details have to be articulated.[4] In particular, several points need to be spelled out. For example:

How is it that the nanobot picks just the enzyme molecule it needs out of this cell, and how does it know just how to hold it and make sure it joins with the local region where the assembly is being done, in just the right fashion? How does the nanobot know when the enzyme is damaged and needs to be replaced? How does the nanobot do error detection and error correction? [Smalley 2003a, p. 39]

Without answering questions of this sort, it is unclear how a molecular assembler – with the particular type of control and precision required by Drexler’s proposal – could actually be constructed, even in principle.

The outcome of these considerations is what can be called Smalley’s dilemma. Supposing that Drexler’s molecular assembler will use something like enzymes and ribosomes, then either the assembler is a water-based entity, or it is not. If it is a water-based entity, then it is limited in what it can achieve; for instance, it cannot produce anything that is chemically unstable in water. (And how will it then produce steel, copper, aluminum, or titanium?) If the assembler is not water based, then the chemistry that underlies it eludes us. In Smalley’s own words:

The central problem I see with the nanobot self-assembler then is primarily chemistry. If the nanobot is restricted to be a water-based life-form, since this is the only way its molecular assembly tools will work, then there is a long list of vulnerabilities and limitations to what it can do. If it is a non-water-based life-form, then there is a vast area of chemistry that has eluded us for centuries. [Smalley 2003a, p. 40]

In either case, according to Smalley, there is trouble. The first horn seems to bring major limitations to what could be achieved by a water-based assembler (e.g. nothing that is unstable in water could then be produced). The second horn, with a non-water-based assembler, requires a chemistry whose details we may not have completely mastered yet.

Interestingly enough, Drexler’s response to the dilemma does not address either of the two horns.[5] Instead, he returns from chemistry to mechanics. Invoking Feynman’s famous 1959 talk (Feynman 1960), Drexler insists:

Although inspired by biology (where nanomachines regularly build more nanomachines despite quantum uncertainty and thermal motion), Feynman’s vision of nanotechnology is fundamentally mechanical, not biological. Molecular manufacturing concepts [that is, Drexler’s own approach] follow this lead. [Drexler 2003b, p. 40, italics added]

With this appeal to Feynman, Drexler then rejects the need to accommodate the details of the chemical processes that, prima facie, seem to be required for the implementation of his own vision. By emphatically placing himself back in a purely mechanical world, he denies any role for biological or strictly chemical processes in his proposal:

Nanofactories contain no enzymes, no living cells, no swarms of roaming, replicating nanobots. Instead, they use computers for digitally precise control, conveyors for parts transport, and positioning devices of assorted sizes to assemble small parts into larger parts, building macroscopic products. The smallest devices position molecular parts to assemble structures through mechanosynthesis – ‘machine-phase’ chemistry. [Drexler 2003b, p. 41, italics added]

Without a doubt, Drexler emphasizes here the mechanical features of his conception of assemblers, invoking conveyors, computers, and positioning devices to assemble structures. We are now miles away from any chemical understanding of a molecular assembler. This is perhaps the position Drexler wants to be in. Presumably, he sees it as a safe place from which to disarm Smalley’s dilemma, given that the latter does not arise for a nonchemical conception of assemblers.

This may be so, but the move has its cost too. And as Smalley does not fail to point out in his final reply, instead of exploring the chemical details that need to be articulated for Drexler’s conception to get off the ground, Drexler simply returned to his mechanical view, bringing back the same difficulties along the way. For Smalley, a purely mechanical conception of molecular assemblers is miles away from anything that could actually be implemented – even in principle – due to the unfeasibility of the required control. With noticeable disappointment, Smalley notes:

I see you have now walked out of the room where I had led you to talk about real chemistry, and you are now back in your mechanical world. […] Much like you can’t make a boy and a girl fall in love with each other simply by pushing them together, you cannot make precise chemistry occur as desired between two molecular objects with simple mechanical motion along a few degrees of freedom in the assembler-fixed frame of reference. Chemistry, like love, is more subtle than that. You need to guide the reactants down a particular reaction coordinate, and this coordinate treads through a many-dimensional hyperspace. I agree you will get a reaction when a robot arm pushes the molecules together, but most of the time it won’t be the reaction you want. [Smalley 2003b, p. 41, italics added]

However, with Drexler’s return to the mechanical view, we are back to the main trouble: the level of control over reactive molecules that is presupposed by this view simply cannot be obtained. In the passage that follows, Smalley emphasizes just this point:

Chemistry of the complexity, richness, and precision needed to come anywhere close to making a molecular assembler – let alone a self-replicating assembler – cannot be done simply by mushing two molecular objects together. You need more control. There are too many atoms involved to handle in such a clumsy way. [Smalley 2003b, p. 41, italics added]

However, if a purely mechanical approach to assemblers does not quite work, what is the alternative? Not surprisingly perhaps, Smalley’s final conclusion insists on the need for returning to a chemical conception of assemblers, as a way to try to obtain, at least in part, some of the required control. As he insists:

To control these atoms you need some sort of molecular chaperone that can also serve as a catalyst. You need a fairly large group of other atoms arranged in a complex, articulated, three-dimensional way to activate the substrate and bring in the reactant, and massage the two until they react in just the desired way. You need something very much like an enzyme. [Smalley 2003b, p. 41, italics added]

In other words, to get the control Drexler needs, it is crucial to appeal to a chemical understanding of the phenomena: instead of conveyors, computers, and positioning devices, we have catalysts, reactants, and enzymes. Even then, it is not entirely obvious that one can fully implement Drexler’s overall vision. After all, chemical processes are often capricious, subtle, and delicate – in ways that repeatedly elude us.
 
 

3. A partial diagnosis: incommensurability at work?

After reviewing the main features of the debate, it is hard to resist the temptation of giving at least a partial diagnosis. Although I do not intend to be comprehensive, I want to highlight significant features that should help us understand some of the moves made above.
 

3.1. Different conceptions of molecular assemblers

First, we clearly have here two radically different approaches to molecular assemblers. On the one hand, there is Drexler’s mechanical conception, which is developed as an engineer’s (conceptual) prototype. It examines, from a mechanical point of view and purely theoretically, in what way molecular assemblers are possible, by essentially formulating a theoretical model in which the relevant physical principles are not violated. The irony is that, as an engineer, Drexler only provides theoretical artifacts, rather than physical ones. For Drexler, however, this is not at all a problem. It is simply part of his theoretical applied science project, which does not aim at providing experimental results, but develops instead only a "theoretical analysis demonstrating the possibility of a class of as-yet unrealizable devices" (Drexler 1992, p. 489, the first italic is mine). Instead of producing physical devices, the aim is to generate theoretical results. In much the same way, the aim of interpreting a physical theory (say, quantum mechanics) typically is the formulation of theoretical results regarding the possibility of certain aspects of the world (on the assumption that the theory in question is true), rather than the generation of new experimental results. The activity of interpretation may not be the most typical activity in scientific practice, but it is a significant part of it nonetheless.

On the other hand, there is Smalley’s chemical approach to molecular assemblers, which challenges the feasibility of Drexler’s mechanical conception. As a chemist, Smalley insists on the production of detectable and controllable effects, emphasizing the need for accommodating the actual, chemical details that are part of the phenomena. (This is precisely what Drexler is unwilling to do.) However, as we saw, Smalley’s challenge goes deeper, given that it disputes even the feasibility in principle of actually implementing anything like a mechanical molecular assembler, due to the difficulty of having the required control.

As a result, and very briefly put, we are faced here with a disciplinary clash (between chemistry and engineering), with different conceptions of the nature of molecular assemblers (chemical versus mechanical), and with distinct practices that may lead to their construction (effective implementation versus conceptual exploration). It is perhaps not surprising that we have hardly any agreement in the debate!
 

3.2. Different levels of incommensurability

Given the significant differences between the two approaches, the picture that emerges is one of incommensurability (see, e.g., Kuhn 1970, Feyerabend 1981, Siegel 1980, Hoyningen-Huene 1993, and Sankey 1994). After all, there are no common standards to assess the adequacy of each conception. According to the standards that Drexler set for himself – namely, to articulate theoretical artifacts – his approach is perfectly adequate. His criteria of adequacy require only the mechanical feasibility of molecular assemblers, in the sense that the phenomena in question are not incompatible with any known physical (and perhaps chemical) principles – even though we may not have the slightest idea of how to actually implement and construct the devices under consideration. For Drexler, the process of actual construction will come later.

But we also saw that, in response to Smalley’s challenge, Drexler’s own conception seems to shift, back and forth, between mechanical and chemical representations of molecular assemblers. Due to the nature of these shifts, we clearly have here incommensurability of a conceptual nature. Drexler’s considered view, however, seems to favor the mechanical conception, which makes his proposal undoubtedly open to Smalley’s criticisms. Smalley challenges, in fact, even the feasibility in principle of such assemblers. Why?

Because Smalley criticizes the core of Drexler’s approach: the requirement of positioning reactive molecules with atomic precision. That is, Drexler demands (a) a perfect control of the position where each reactive molecule will be placed, and (b) a perfect control of the way in which a given reactive molecule will interact with other molecules. Smalley challenges both assumptions. If we were to implement anything like Drexler’s proposal in the lab, we would face insurmountable difficulties. Given the huge number of atoms present in the phenomena, we would not have the precise control to determine in which way a given reactive molecule would interact (against (b)). Thus, it would not be possible to position precisely the reactive molecule (against (a)).

Smalley, in turn, adopts a radically different conception of the nature of molecular assemblers. With his chemical conception, assemblers are subject to all the vagaries of chemical processes. And it is this conception that grounds Smalley’s criticism of Drexler’s idea of atomic precision. If the chemical factors involved in the interactions between reactive molecules are taken into account, it becomes clear that we cannot simply have the required control envisaged by the mechanical approach.

Smalley also challenges the methods used by Drexler to implement his proposal. The construction of theoretical artifacts – as the outcome of Drexler’s theoretical applied science – is not enough to establish the feasibility of molecular assemblers as Drexler conceives of them. After all, any attempt to actually implement such assemblers (for example, by trying to construct them in the lab) will immediately face trouble, given the relatively limited control that we can actually have over chemical reactions at the nanoscale.

The points just made indicate that there are at least three levels of incommensurability here: cognitive, conceptual, and methodological (see, e.g., Kuhn 1970, Laudan 1984, and Sankey 1994).[6] (i) Cognitive incommensurability emerges when there are no common standards to assess the adequacy of certain theories about the phenomena under examination. (ii) Conceptual incommensurability is the outcome of the lack of common standards to adjudicate concepts used to describe the phenomena. (iii) And finally, methodological incommensurability arises from the lack of common standards of assessment of the reliability of the different methods used. How do these levels of incommensurability bear on the present discussion?

(i) The debate here involves cognitive incommensurability in that each side adopts different theories to articulate the corresponding conception of assembler: mechanical theories in Drexler’s case and chemical theories in Smalley’s. Each of these theories is, of course, adequate in its respective domain, but given the dramatically different ways in which Drexler and Smalley conceptualize the domains (one mechanically, the other chemically), it is unclear how one could assess the overall adequacy of the theories without simply begging the question against the rival proposal.

(ii) The debate also involves conceptual incommensurability, given the radically different ways in which molecular assemblers have been conceptualized: Drexler conceives of them in basically mechanical terms, whereas Smalley is highly sensitive to the chemical features involved in the phenomena. But how could we assess the adequacy of such concepts without simply prejudging the nature of the assemblers themselves? Depending on the view of assemblers we adopt (a chemical or a mechanical view), we obtain very different answers regarding the adequacy of the concepts in question.

(iii) Finally, the debate includes methodological incommensurability as well, given that each view has a different method of articulation of molecular assemblers. Drexler’s theoretical applied science approach insists that we should first develop theoretical artifacts, establishing the theoretical possibility of such assemblers. Smalley, in turn, with a chemically grounded view, highlights the need for controllable and detectable results before we could even talk realistically about the possibility of such objects. Unless we could, in principle, develop techniques of implementation of molecular assemblers – identifying the relevant operations to be performed in the lab – it is hard to judge how such assemblers are technologically possible. The fact that a device is theoretically possible (that is, its existence does not violate any laws of physics or chemistry) is not sufficient to guarantee that we can construct that device, and hence establish that it is possible in the actual world, given our technology. Drexler agrees, of course, with the distinction between theoretical and technological possibility, and in fact, theoretical applied science often moves ahead of technology (Drexler 1992). But for Smalley, without accommodating the practical details of what actually goes on in the lab, without taking into account the technological aspects of current chemistry, we cannot claim to have established even the theoretical possibility of the devices in question. We need more than lack of inconsistency with physical and chemical principles. The technology that goes on in the lab is as much part of science as the theories that are articulated there. Given that the production of a molecular assembler crucially relies on that technology, we need to consider the latter as well.

Note that the fact that Drexler and Smalley’s views are incommensurable does not entail that they are incomparable. The absence of common standards of assessment only entails that evaluative judgments cannot be made without begging some questions, such as assuming the set of standards of one view to judge the adequacy of the other. Concepts, theories, and methods can, of course, be compared. We have been doing this all along. What may not happen is that we will be in a position to decide – without circularity – the adequacy of these concepts, theories, and methods, given the lack of a common standard of adequacy.

Why is it significant to identify the various kinds of incommensurability found in the debate between Drexler and Smalley? Because this helps to explain in what ways the debate has been inconclusive, and why one inevitably ends up with the impression that Drexler and Smalley are simply talking past each other. With different conceptions of assemblers and with different methodological strategies to articulate such assemblers (i.e., strategies that aim to show the feasibility of such assemblers and to sketch how the latter could, in principle, be constructed), it is not surprising that there is no agreement as to how the debate could be settled. Without common standards of evaluation, or common methods of assessment and construction of assemblers, it is hard to see how to resolve this debate without simply begging the question against one side or the other.

By highlighting the incommensurability involved in the discussion, we can also understand another feature of the debate: the many layers in which it takes place. As noted above, we find not only different conceptions of molecular assemblers (chemical versus mechanical) and different methods of construction or implementation of such assemblers (actual implementation versus conceptual exploration), but also, more generally, different goals for nanotechnology research – given the different visions underlying Drexler’s and Smalley’s projects. As we saw, Drexler’s vision for nanotechnology is one of atomic precision and perfect and complete control over molecular reactions. It is essentially an engineer’s vision. Smalley’s vision, in turn, insists on the production of detectable and controllable phenomena, and takes as a crucial part of scientific activity the manipulation and stabilization of the phenomena. This vision challenges the viability of a notion of control that is not grounded on what can actually be performed in the lab. It is essentially a chemist’s vision. And, as was pointed out, at each of these levels, we have incommensurability.
 

3.3. An alternative way of interpreting the debate: instruments at work

The considerations just made implicitly suggest an alternative strategy to analyze the debate between Drexler and Smalley. Perhaps with some adjustments, this alternative could provide a way to ‘settle’ the dispute without (hopefully) begging any questions.

As is well known, Larry Laudan developed a very interesting framework to assess scientific debates: the reticulated model (Laudan 1984). The idea is that scientific practice is articulated in terms of three interrelated levels: goals, methods, and theories. The level of goals involves the aims and values shared by a particular scientific community. These goals include certain ways of assessing and structuring scientific research, for example, searching for and valuing empirically testable and informative theories over mere conceptual sketches of possible experiments. The level of methods deals with methods of theory construction and theory evaluation, as well as the particular experimental strategies used to implement, control, and stabilize the phenomena. Finally, the level of theories includes the various theories and theoretical assumptions adopted by a particular community to explain and predict the phenomena.

According to this picture, scientific change involves change on at least one of the three levels, but never changes in all of them at once. Thus, we could use the ‘shared’ level (say, the level of theories) to assess the adequacy of the remaining levels (say, goals and methods), and in this way, try to settle the debate. For instance, suppose that a given community has as one of its goals to construct a machine that accelerates objects to a speed faster than that of light. But if the community also accepts a theory that states that no object could travel faster than light, this would establish the unfeasibility of the goal. Thus, the community could invoke that theory to revise the goal.

Of course, this simple model does not cover all of the crucial elements of scientific practice. We also have, at least, the level of scientific instruments (for a fascinating and sophisticated account, see Baird 2004); and instruments cannot be identified with any of the three previous levels. (i) Although theories are often invoked in the construction and manipulation of instruments (including the interpretation of the results), instruments are, of course, much more than theories, and play a significantly different role in scientific practice. For instance, instruments provide the tools in terms of which experiments are possible, allowing scientists to probe details of the physical world that would otherwise be unavailable to them. (ii) Although the use of instruments requires, of course, ingenuity and technique, the skills demanded go well beyond whatever methodological rules may be adopted in scientific practice. Learning such skills involves special requirements and abilities, such as the ability to calibrate the instrument and to distinguish artifacts of the instrument from the genuine information it provides. (iii) Finally, the goals and values of instrumental practice need not be the same as those of theoretical practice, given that the former is concerned with details of the instrumental apparatus that need not be the primary concern of the latter. Thus, instruments are a crucial additional level of consideration in scientific practice.

For simplicity’s sake, let us consider scientific practice as involving certain aims, methods, theories, and instruments. Bearing this in mind, we can now return to the Drexler-Smalley debate and identify the levels in which it has been conducted. As noted above, there are differences in all of the first three levels. We have distinct aims: Drexler’s theoretical applied science project is ultimately concerned with the production of theoretical artifacts, whereas Smalley insists on the need for the construction of detectable and controllable phenomena. There are different methods: Drexler invokes theoretical exploration to establish the possibility of certain devices, whereas Smalley insists on the actual implementation of the relevant phenomena in the lab. Finally, there are different theories: Drexler’s mechanical approach to molecular assemblers emphasizes the mechanical features of the phenomena, whereas Smalley insists on the need for accommodating the relevant chemistry.[7]

Despite the disagreements at these three levels, the picture changes if we consider the fourth level, that of instruments. Here, at last, we find agreement between our authors. Both agree that the use of appropriate microscopy devices is crucial for the implementation of the phenomena in question, and necessary for the actual construction of a molecular assembler (assuming that it can be done). After all, it is through these instruments that the scientific community has the control it has over nanoscale phenomena. And it is only in terms of appropriate instruments that the community might be able to build an assembler. After all, given the size of such assemblers, the mediation of appropriate instruments is indispensable to control them.

With this minimal agreement, we can now work our way upward, and assess the debate from the point of view of instruments. Given that instruments are indispensable to the construction, stabilization, and control of phenomena at the nanoscale – and both sides of the debate agree on that – a purely theoretical approach to molecular assemblers that does not take into account the need for such instruments misses a crucial point of what needs to be accommodated. And Smalley’s insistence on the need for the production of controllable and detectable devices can be seen as an emphasis on just the need for appropriate instruments.

In this way, we see how Smalley is ultimately justified in making the requirement he makes, without begging the question against Drexler. After all, both parties share a commitment to the indispensability of appropriate instruments for controlling nanophenomena. Smalley, however, articulates this commitment further, introducing the requirement that detectable results should be produced as part of the determination of the possibility of molecular assemblers. After all, given that instruments are indispensable for the construction of such assemblers, to determine whether the latter are possible, it is crucial to be able, at least in principle, to produce detectable results. In this way, the overall proposal Smalley advocates seems more adequate.

Of course, this does not establish the adequacy of Smalley’s criticism of Drexler. This is a separate issue, and is open to the incommensurability charge discussed above. For, as was noted, the criticism relies on concepts, methods, and theories that are not shared by Drexler. However, the emphasis on instruments indicates one way in which the debate could be decided. After all, there is a common perspective – the commitment to the indispensability of instruments – that is shared by both sides, and from which the overall adequacy of the two proposals can be determined, without assuming points that are contentious in the debate.[8]
 
 

4. Conclusion

As we saw, the debate between Drexler and Smalley has many levels and involves a variety of moves. Given the dramatic differences in concepts, aims, theories, and methods, and the difficulty of finding common standards of assessment of them, it is understandable that we are faced with many levels of incommensurability.

However, by exploring the shared commitment to instruments – as the basic source of stable information about the phenomena under consideration – it is possible to overcome, in part, the incommensurability and decide the debate. Not in the sense of conclusively settling the issue, which is not to be had in any case. But at least in the sense of appreciating what needs to be done to carry out the visions that underlie each proposal. By identifying the crucial role that instruments play in the articulation of these visions, we also see the role these visions can play in shaping nanotechnology.
 
 

Notes

[1]  My thanks go to Rick Adams, Davis Baird, David Berube, R.I.G. Hughes, Loren Knapp, Cathy Murphy, Alfred Nordmann, Chris Robinson, Joachim Schummer, and Chris Toumey for extremely helpful discussions. An earlier version of this paper was presented at a workshop on the Drexler-Smalley debate at the University of South Carolina’s NanoCenter. I wish to thank all those who attended for their contributions, and Joachim Schummer for his encouragement and help. The material is based upon work supported by a grant from the National Science Foundation, NSF 01-157, NIRT. All opinions expressed here are mine and do not necessarily reflect those of the National Science Foundation.

[2]  Of course, Drexler has actually not constructed a molecular assembler. The question of the possibility of constructing such a device would be irrelevant if the device had already been constructed. It is enough for Drexler’s purpose to establish the theoretical possibility of such a construction, sketching how it could be performed in principle. If no known physical and chemical laws are violated in the construction, the resulting process is, at least, theoretically possible – even though we may not have the slightest idea of how to implement the process and thus actually construct the assembler.

[3]  Note that, according to Drexler, molecular assemblers will not manipulate individual atoms, but only reactive molecules. I will return to this point below.

[4]  Or, at least, before actually implementing a molecular assembler, presumably we would need to accommodate the chemical details needed in the theoretical description of the latter.

[5]  Perhaps Drexler could have challenged the second horn, noting that there have been studies of several chemical and biological processes that are not water-based. But, in this case, it might not be so clear how Drexler could still maintain the mechanical nature of his assemblers, given that the relevant work would have to be done by the appropriate chemical and biological processes.

[6]  The literature on incommensurability is, of course, huge (see, e.g., Kuhn 1970, Feyerabend 1981, Siegel 1980, Hoyningen-Huene 1993, Sankey 1994, and the references quoted in these works). But this is not the place to review it. For the purposes of this paper, I will only focus on the issues that are significant for the present debate.

[7]  This is a bit rough. Presumably, Drexler would agree on the relevance of chemical theories for his overall approach, which goes beyond his account of molecular assemblers (see Drexler 1992). However, if we focus only on Drexler’s conception of assemblers, we get a more ambivalent picture regarding the role of chemistry. As we saw in his response to Smalley, Drexler shifts back and forth between a mechanical and a more chemical understanding of assemblers. However, given that Drexler’s considered view seems to be the mechanical one, the crucial role is ultimately played by mechanical theories.

[8]  The community of chemists typically also shares Smalley’s commitment to the need for the relevant instruments as part of chemical practice. It is therefore not surprising that most members of that community will also accept Smalley’s critical assessment of Drexler’s proposal. This is expected, of course, given that the values, methods, and theories of that community are being assumed. Drexler, however, does not share them. This is another expression of the incommensurability involved in the debate.
 
 

References

Baird, D.: 2004, Thing Knowledge: A Philosophy of Scientific Instruments, University of California Press, Berkeley.

Drexler, E.: 1986, Engines of Creation: The Coming Era of Nanotechnology, Anchor Books, New York (expanded edition with a new afterword, 1990).

Drexler, E.: 1992, Nanosystems: Molecular Machinery, Manufacturing, and Computation, John Wiley & Sons, New York.

Drexler, E.: 2003a, ‘Open Letter to Richard Smalley’, Chemical & Engineering News, 81, 38-39.

Drexler, E.: 2003b, ‘Drexler Counters’, Chemical & Engineering News, 81, 40-41.

Drexler, E., Phoenix, C.: 2004, ‘Self-Replication in Nanotechnology: Feasible, Potentially Safe, and Unnecessary’ (unpublished paper, in preparation).

Feyerabend, P.: 1981, Realism, Rationalism and Scientific Method (Philosophical Papers, volume 1), Cambridge University Press, Cambridge.

Feynman, R.: 1960, ‘There’s Plenty of Room at the Bottom’, Engineering and Science, 23, 22-36.

Hoyningen-Huene, P.: 1993, Reconstructing Scientific Revolutions: Thomas S. Kuhn’s Philosophy of Science (trans. by A. Levin), University of Chicago Press, Chicago.

Joy, B.: 2000, ‘Why the Future Doesn’t Need Us’, Wired, 8, 1-11.

Kuhn, T.: 1970, The Structure of Scientific Revolutions (second edition), University of Chicago Press, Chicago.

Laudan, L.: 1984, Science and Values: The Aims of Science and their Role in Scientific Debate, University of California Press, Berkeley.

Sankey, H.: 1994, The Incommensurability Thesis, Avebury, Aldershot.

Siegel, H.: 1980, ‘Objectivity, Rationality, Incommensurability, and More’, British Journal for the Philosophy of Science, 31, 359-384.

Smalley, R.: 2001, ‘Of Chemistry, Love and Nanobots’, Scientific American, 285, 76-77.

Smalley, R.: 2003a, ‘Smalley Responds’, Chemical & Engineering News, 81, 39-40.

Smalley, R.: 2003b, ‘Smalley Concludes’, Chemical & Engineering News, 81, 41-42.


Otávio Bueno:
Department of Philosophy, University of South Carolina, Columbia, SC 29208, USA; obueno@sc.edu

