HYLE--International Journal for Philosophy of Chemistry, Vol. 11, No.1 (2005), pp. 45-76.
http://www.hyle.org
Copyright © 2005 by HYLE and Louis Laurent & Jean-Claude Petit


Special Issue on "Nanotech Challenges", Part II



Nanosciences and its Convergence with other Technologies

New Golden Age or Apocalypse?

 

Louis Laurent & Jean-Claude Petit*

  

Abstract: Nanosciences and nanotechnologies are developing at an incredibly rapid pace, promising a true revolution in a wide variety of fields where the capability to manipulate matter at the atomic or (supra)molecular scale is essential. This includes information processing systems, medical diagnoses and treatments, energy production and sustainable development, as well as a number of more futurist ideas that, as yet, remain pure fiction. These developments have begun to generate controversies and fears in the scientific community itself and the larger public. This article critically reviews the potential problems of an uncontrolled ‘nanoworld’ (grey goo, toxicity of nanoparticles, RFIDs, privacy, etc.) and the associated fears, as they appear in the literature. Suggestions to effectively manage controversies in this field, based on a sociological approach, are proposed.

Keywords: nanosciences, nanotechnologies, fears, sociology of science and technology, controversies.

 

1. Introduction

The end of the twentieth century witnessed a major scientific and technological development, the consequences of which are only now beginning to become apparent. Three factors – a better understanding of the properties of matter at the atomic level, progress based on the molecular approach to the way living organisms operate, and the rise of information processing – have led to the increasing unification of condensed state sciences (physics, chemistry, biology) on the nanometer scale, forming what we now know as the nanosciences. The origins of this movement are often traced to the end of 1959, the date of Richard Feynman’s founding speech ‘There’s Plenty of Room at the Bottom’ [1], made at the annual meeting of the American Physical Society at Caltech. Rather than the emergence of a fundamentally new discipline, the nanosciences can be considered the result of the convergence of various disciplines on the (supra)molecular level, or even as a new way of looking at old questions. We can imagine a future meeting of these disciplines with the science of complexity, currently the missing link between well-controlled nanoscale objects and much ‘richer’ systems, such as those Nature has developed in cells and the brain. At the same time, propelled by new and ever increasing numbers of applications, the world of technology is undergoing a similar evolution. During the 1990s there was increasing awareness of the potential of hybrid applications bringing together microelectronics, biology, and information technology, particularly in the form of communicating objects, biochips, and miniature mechanical systems.

The coming-together of this group of disciplines is sometimes referred to as NBIC convergence (for nanoscience, biology, information technology, and cognitive sciences). This evolution – sometimes considered a revolution – can be seen to herald major innovations, the implications of which could in certain cases profoundly affect our way of life. All fields are concerned, and huge investments (billions of euros) have been approved in the US, Europe, and Japan. In the short term, these have been directed to sectors such as information technologies, medicine, sustainable development, and the energy sector, for all of which there are significant research programs and already products on the market.

Similar themes, exploring the long-term developments of the nanosciences, have also been taken up in literature, in books by Ray Kurzweil, Hans Moravec, and Eric Drexler, among others. These works should be regarded as seeking to stimulate long-term reflection, rather than as predictions to be taken literally. They are based on certain scientific facts (their authors having worked in the fields they explore) but for the moment are fictional accounts. They paint a picture of a society where control of manufacturing on the atomic scale enables the most extravagant ideas to be realized.

  • One of these is increasing computational capacity to the point where it is possible to create systems with a higher level of performance than the human brain, the goal being to produce autonomous machines, which may demonstrate ‘consciousness’ (the meaning of which remains to be defined), and interfaces with the human brain (to extend its capacities, or to plug our senses into a virtual reality) (Moravec 1999, Kurzweil 1999).
  • In the same way, convergence of nanotechnology with other disciplines would enable deficiencies in the human body to be repaired, influence our senses and the way our brains work in a profound way, and even improve human beings. This topic is addressed in the NSF report Converging Technologies for Improving Human Performance (NSF 2002).
  • Another idea is the possibility of manipulating matter at the molecular level to produce optimized devices from which all the elements could be re-assembled, atom-by-atom, after use. The founding document for this line of reflection is the often-quoted book by Eric Drexler Engines of Creation (Drexler 1986). In this seminal work, the author spends a long time describing ‘assemblers’, nanomachines capable of manufacturing optimized products, and also of creating themselves: machines imitating living entities.

Scarcely have the promises of nanoscience been formulated in the fields for which significant progress is expected (see above), than terrifying perils are held to await us in a future that is both apocalyptic and imminent. Furthermore, it has often been the pioneers themselves, such as Eric Drexler and Bill Joy in his famous 2000 article ‘Why the Future Doesn’t Need Us’ [2], who have provoked these fears at a stage where no-one – and certainly not the general public in its ignorance of nanoscience – had started to pay attention to them. It is a strange case of the Sorcerer’s Apprentice taking on the role of Cassandra.

This means that nanoscience and nanotechnology are subject to controversy before they can be said to really exist. They are expected to demonstrate real advantages while supposed negative effects are already being criticized and held to be the harbingers of veritable catastrophes. Over the past few decades, developments in science and technology have inspired ever-greater fear: nuclear technology, cloning, information technology, GMOs (see, for example, Farouki 2001) in a broader context of increasing, and apparently irreversible challenges to the traditional notion of progress. However, the question posed by nanoscience and nanotechnology is in the end perhaps not simply one more question that specialists and decision-makers have to deal with, through a new governance process, in order to continue to move forward despite the reservations (supposed, real, or emerging) of society.

Jean-Pierre Dupuy underlines another viewpoint (Dupuy 2004). The NBIC convergence implies an evolution in our representation of Nature, in particular of life and cognition: by considering all processes at the molecular level and trying to identify the ‘algorithms’ that rule these processes, humans are tempted to simulate, and then create, what until now only Nature could achieve. This evolution is accompanied by a focus on complex systems increasingly analogous to natural ones, and also by a change in methodology: investigation consists of a development phase followed by observation, as in the study of complex systems such as those built from distributed intelligent agents or genetic algorithms. According to Dupuy, this empirical method, in which discovery lies precisely in the unexpected, requires careful attention. Indeed, the use of complex systems (‘mock-ups’ of living or thinking objects) could produce unexpected effects that cannot be reduced to a probability distribution.

The contrast between the flood of technological marvels promised in the relatively short term (happiness tomorrow, just invest a few billion euros!) and the irreversible catastrophes forecast (this time, it really is the end of the world!) ought to lead us to consider: what is at stake in nanoscience and nanotechnology, what risks have already been identified, and what measures do we need to take to be prepared?

In a field which is characterized by exceptional diversity, in terms of both scientific and technical results, and of positions adopted in the debate by actors from very different backgrounds, we wish to examine the various ingredients of the controversy, to stimulate reflection on the part of the scientific and technical community and, by extension, of all those who are starting to be concerned by this question. In Section 2, we briefly review the history of the controversy and discuss four examples that illustrate the variety of themes.

We may consider that these questions belong to a considerably vaster debate, bound up with the notion of progress. Scientists, engineers, and industrial corporations are quick to place their technological innovations under the sign of progress, while the benefits of progress are strongly contested by other groups who, with the same degree of sincerity as the scientists, try to warn us of the possible negative effects of nanoscience and nanotechnology. In Section 3, we will propose a typology of these fears according to three fundamental themes around which they seem to revolve. We will show that these themes, which are generally associated with fear of science and technology, are profoundly rooted in the Judeo-Christian tradition.

This suggests that, on the one hand, a range of responses must be proposed to tackle these questions of different types, and that, on the other, it would be unproductive to address these issues from a purely scientific viewpoint. In Section 4, we will try to identify some practical solutions that could lead to a better manner of responding to these questions.


2. Nanosciences and its convergence with other technologies: doubts set in

2.1 First opposition

Several writers were quick to signal the potential risks associated with nanoscience and nanotechnology. In his book Engines of Creation, at the beginning of the chapter ‘Engines of Destruction’, Eric Drexler mentions the potential danger of his assemblers: "unless we learn to live with them in safety, our future will likely be both exciting and short" (Drexler 1986, p. 171). The most vivid image of fear related to nanotechnology is undoubtedly ‘grey goo’. The original premise is that one day we may be able to manufacture nanometer-sized machines capable of working on the atomic scale. ‘Grey goo’ is a mass of such machines that, having become independent, could cause damage to the human race or even devour everything in their quest to reproduce, including the earth’s crust. This last scenario is sometimes called ecophagy.

For some years now, the rising status of nanotechnology has been accompanied by some publications and debates about possible consequences, like ecophagy or the development of weapons of mass destruction [3]. The year 2003 does in fact stand as a turning point where this debate, which until recently had taken place mainly in private, came to involve a growing number of people. Publicity and the excessive or even utopian promises accompanying research contributed to bringing the question to a head for many. Three events occurring in a short space of time then seem to have provided the trigger.

First there was Michael Crichton’s novel Prey, published in November 2002 (Crichton 2002). The plot of this novel concerns a company specialized in nanotechnology, which makes nanorobots intended to fly in a swarm to form a virtual camera. Interestingly, the systems used by this company for its production are hybrids of bacteria and nanomachines. The inventors then lose control of their invention. This book was a big success and, even if this was not the aim of the author, it is often cited as revealing the concerns that nanoscience can provoke.

Soon after, in January 2003, came the publication by the ETC group of long and virulent manifestos warning of the dangers of nanotechnology, which they call ‘atomtechnology’. The principal message of this group is the need for a moratorium on the manufacture of nanotechnology-based products until their effects on the environment and living organisms are understood. In The Big Down (ETC 2003a), the group details the dangers associated with nanotechnology, the development of which is described in four stages (of which the first two correspond to the current situation or immediate future): nanomaterials; manipulation of nano-objects to carry out assemblies with precise positioning; the creation of factories or nanorobots working on the molecular level; and finally, convergence with living organisms. With regard to the nanoparticles generated by this industry, the ETC group mentions their possible accumulation in the organism, their potential toxic effects (with reference to asbestos), and their ability to find their way anywhere, including into the food chain. They also mention long-term risks such as ‘grey goo’, and the possibility of creating unknown materials that may in some way be ‘anti-Nature’. In the report entitled Green Goo: Nanobiotechnology Comes Alive (ETC 2003b), the ETC group takes up the crossover between nanotechnology and biotechnology. The convergence is discussed by focusing on the catastrophic scenarios that it could generate – such as ‘green goo’, a group of artificial organisms produced by biotechnology that go out of control.

The third event was the position adopted by Prince Charles in April 2003, which generated considerable media attention (Highfield 2003, Radford 2003). The Prince asked British scientists to consider the "enormous environmental and social" risks (Radford 2003) posed by nanotechnology, alluding in particular to grey goo. The speech provoked strong reactions in both political and scientific circles. Responding to these reactions, the British government commissioned the Royal Society and the Royal Academy of Engineering to carry out research on nanotechnology, including its potential benefits and risks (Royal Society 2004).

These three events set off a chain of subsequent reactions. The Greens/European Free Alliance group in the European Parliament raised the question and organized a special day on the subject in Brussels on 11 June 2003, where associations such as ETC and Greenpeace were invited to present their views. Certain Green members of Parliament, such as Caroline Lucas, have openly voiced their concerns about the risks associated with the development of nanoscience in the absence of regulation (Lucas 2003). Also worth mentioning is the large report Future Technologies, Today’s Choices submitted by Greenpeace in July 2003, which deals with both artificial intelligence and the nanosciences (Greenpeace 2003). The document presents a balanced picture of the situation, discussing both the advantages and disadvantages of nanotechnology.

This triggered a significant and growing reaction from various bodies. In 2004, various reports were released dealing with topics such as the general impact of nanoscience, toxic effects, and the consequences of convergence (Swiss Re 2004, Sanco 2004, EHS 2004, Nanoforum 2003, CTEKS 2004). In addition, there has been a significant increase in publications about the possible toxic effects of nanoparticles. Prince Charles (2004) also returned to his earlier statement on nanoscience, arguing that the media had much exaggerated his position.

To illustrate the variety of questions that are raised we discuss four issues in more detail.

2.2 Grey and green goo

What stands out clearly is the grey goo ‘fad’ – the term is used by some as a catchword to attract the attention of readers before moving on to other dangers such as the toxicity of nanomaterials, and is mentioned by others, though rarely, as a real danger. The starting assumption is that in future we will be able to create nanomachines that can manipulate matter at the molecular scale to make new products. Often this is associated with what Drexler calls ‘exponential fabrication’, i.e. when nanomachines are able to duplicate themselves. There are in fact two ways of addressing the issue.

The first is to consider biology. Cells in fact provide a number of examples of organelles, systems that work on the molecular level, for example to propel, supply energy, synthesize, repair, and duplicate. Well before molecular biology existed, empirical knowledge of living organisms was used to produce materials (wood, wool, cotton, silk, leather, paper, etc.), and to manufacture food, or modify it (alcoholic fermentation, bread-making, cheese-making, etc.). Since the 1970s, we have been able to influence the genetic machinery to produce new, modified organisms. Some molecules, such as insulin, are now manufactured using genetically modified organisms.[4] We are a long way from being in control of the way living organisms operate, but we have been using it for a very long time. A point of note is that, as George Whitesides (2001) makes clear, the ‘green goo’ scenario – the biological equivalent of grey goo – has already taken place on the planetary level (in our favor!). The earth used to be a mineral world with a carbon dioxide atmosphere, but life profoundly modified this environment, completely transforming the soil, atmosphere, and climate.

The second, more general approach is that of Eric Drexler, who argues that the existence of living organisms is a proof of the feasibility of nano-industry, and often backs up his reasoning with references to biology. At the same time, he argues that natural evolution has not explored radically different systems not based on proteins and DNA, whereas other systems, perhaps on a different chemical basis, are conceivable and might, for example, be subject to fewer constraints regarding temperature. The scientific community is working on understanding the properties of nanometric objects and on developing devices for information processing and other actions on the nanoscale. However, we are still a long way from the grey goo scenario, and even its feasibility is under discussion (Smalley 2001, 2004). There is a fundamental difference between these achievements and microorganisms or assemblers as they might be imagined: the degree of complexity. Nature achieved this through a long evolutionary process, and the way in which life forms operate is of such incredible complexity that it exceeds that of all other machines created by man. Past and planned projects remain incomparably simpler than those supplied by living organisms, and it is hard to imagine how they could give rise to a ‘parallel biology’, i.e. objects capable of reproducing and acting according to complex scenarios.

For the longer term, there is no scientifically grounded answer to the question ‘Will it one day be possible to create nanorobots from scratch?’ Responses to this question range from casting doubt on the seriousness of the author to saying ‘The question is not whether it is possible but when’.

Presently, the debate tends to deal with more realistic topics and is likely to evolve along two lines, depending on the time scale:

  • According to a recent paper by Drexler and Phoenix (2004), there is a much lower barrier to the achievement of non-replicating nanomachines, just as is the case for macroscopic devices. Thus, the most likely medium-term scenario is the production of nanomachines that can fulfill a single task (nanomedicine, fabrication, depollution, weapons, etc.) without duplicating themselves.
  • In the long term, nanotechnological convergence could lead to far greater control of the behavior of the cell on the molecular level: synthesis of different elements, manufacturing of parts of hybrid cells (living-artificial), deeply modifying life (synthetic biology), etc.

2.3 Nanomaterials and nanoparticles

While the grey goo story is often used as a dramatic symbol, the risk most often mentioned in the nanoscience field is the commercialization of nanomaterials whose components could ‘crumble’ during use or finally degrade in the environment. Certain ‘crumbs’, nanometric in size, could build up in the environment without degrading, disturbing ecosystems or even having toxic effects on humans. Claims are often made about either the indestructibility of certain types or, on the contrary, their extreme reactivity, their capacity to adsorb and transport dangerous molecules, and their extreme mobility. As discussed earlier, the most extreme positions go as far as to demand a moratorium on nanomaterials pending a better understanding of their behavior.

On the one hand, being ‘nano’ is not enough to make a product dangerous. Materials structured at the nanometric scale, and nanoparticles, are in no sense a new or strange type of product created by a new high-tech industry. Indeed, wood, natural textiles, and many other products belong to this category. Loose nanoparticles are not unknown to us either. Nature (sea spray, volcanic ash, desert dust) and industry (carbon black, titanium dioxide) generate large amounts of ultra-fine particles (millions of tons a year). In a way, all combustion processes are nanotechnological! In an urban atmosphere, for example, there are typically between 10 and 20 million particles smaller than 100 nm per liter of air, which represents between 1 and 2 nanograms of matter (Oberdörster 2002). Establishing a moratorium on nanomaterials, as the ETC group demands, would be difficult since, strictly applied, it would affect many products currently on sale.
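These two figures are mutually consistent, since the mass of a spherical particle scales with the cube of its diameter. The short sketch below checks the order of magnitude, assuming spherical particles of unit density and an illustrative mean diameter of 60 nm; both values are assumptions for the purpose of the estimate, not data from Oberdörster (2002).

```python
import math

def particle_mass_g(diameter_nm, density_g_cm3=1.0):
    """Mass in grams of one spherical particle of the given diameter."""
    r_cm = diameter_nm * 1e-7 / 2.0                 # nm -> cm, then radius
    volume_cm3 = (4.0 / 3.0) * math.pi * r_cm ** 3  # sphere volume
    return density_g_cm3 * volume_cm3

# Hypothetical mean diameter of 60 nm (below the 100 nm cut-off)
m = particle_mass_g(60)            # ~1.1e-16 g per particle

# 15 million particles per liter (mid-range of 10-20 million), in nanograms
total_ng = 1.5e7 * m * 1e9
print(f"{total_ng:.2f} ng per liter")   # falls in the quoted 1-2 ng range
```

With these assumptions the count of 10-20 million sub-100 nm particles per liter indeed corresponds to roughly 1-2 nanograms, as quoted.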

On the other hand, this reasoning alone is no basis for blind optimism. Firstly, we have historical examples of mass-marketed products that, although providing many advantages, turned out to be harmful, such as asbestos and DDT. Moreover, the fact that the environment is littered with traces of by-products of the products we use shows that any decision on mass production has consequences. Furthermore, there are growing reasons to believe that certain nanoparticles may have a detrimental effect. For example, recent work on the toxicity of nanotubes (e.g. Service 2004) clearly shows harmful effects on rats and mice, which seem to be due to the indestructibility of the nanotubes in the lung (formation of granulomas). It is also claimed that, unlike natural nanoparticles, artificial ones are engineered to be more active and highly dispersible, and thus possibly more harmful. While it is too early to extrapolate such results to indicate toxicity for humans, they do clearly show that research must be carried out.

The various reports mentioned in Section 2.1 conclude, among other things:

  • Nanometric particles indeed have properties that may differ from those of the bulk material.
  • Additional work on toxicology is important, as the properties of these particles cannot be predicted from those of materials of greater mass, and standards and procedures need to be established.
  • This also concerns ‘traditional’ particles, such as those generated by combustion.

The key question is: How can a product be labeled as potentially dangerous on account of the nanoparticles that it might throw off into the environment during its life-cycle? To answer it, we need to know the physical and chemical properties of the material, how emitted nanoparticles will evolve in the atmosphere, and the behavior of these particles in the organism (penetration channels, elimination mechanisms, pathogenic effects). This research topic will certainly greatly expand in the years to come, and will probably teach us some surprising things about familiar products. It is likely that it will even cast a new light on the issue of urban pollution.

2.4 Privacy and chips

For the past three decades, electronics and information technology have continually advanced, and costs have fallen considerably. This progress has led to questions being addressed ever more urgently about the growing risk of an individual losing control of information about his or her private life, where such data is digitized, transmitted, and stored, with new possibilities made available for processing information from several interconnected sources. Nanotechnology, while not the only technology at issue, potentially plays an important role insofar as it enables the development of new sensors, miniaturization, the possible design of systems with low energy consumption (hence autonomous), and increases in processing power.

A particularly important example is the development of RFIDs (Radio Frequency Identification Devices) that contain a transmitter and logical circuits. When queried, they can transmit information, often an ‘electronic product code’ with enough bits to identify every individual object manufactured in the world. In their passive form, these objects do not require batteries. Their range depends on the frequency and varies from a few centimeters to about twenty meters for passive systems, while the range is much longer for systems with a power supply. Their size, which has tended to be measured in millimeters, has been reduced to the sub-millimeter scale in the most recent examples. These devices were perfected during the 1970s and have gradually been implemented in a series of contexts such as access systems (badges, toll-booths) and short-range identification (goods in stock, anti-theft, identification of animals). The unit price of the devices still ranges from about 10 cents to a few euros, but prices are expected to fall in the next few years, making RFIDs hardly more expensive than a label. They would seem to have limitless potential for use as they provide considerable advantages: stock monitoring systems in companies; objects capable of informing their environment of their presence; authentication systems (access badges, means of payment, etc.). Moreover, RFIDs are only the first generation of communicating systems. There is much room for further development, for instance, by adding local computing power, sensors, and actuators, like the systems originally developed by Kris Pister at Berkeley and commercialized by DUST Inc.[5]
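The claim that an electronic product code has "enough bits to identify every individual object manufactured in the world" can be made concrete with a little arithmetic. The sketch below assumes the common 96-bit code length and a deliberately generous, hypothetical figure for annual world production; both are illustrative assumptions, not data from this article.

```python
# Size of the code space of a 96-bit electronic product code.
epc_bits = 96
namespace = 2 ** epc_bits            # distinct codes available (~7.9e28)

# Hypothetical upper bound: 10 trillion manufactured objects per year.
objects_per_year = 10 ** 13
years_covered = namespace // objects_per_year

print(f"{namespace:.2e} distinct codes")
print(f"enough for roughly {years_covered:.1e} years of production")
```

Even under this generous production estimate, the code space would last many orders of magnitude longer than any realistic product lifetime, which is why unique per-object identification, and hence cross-referencing, becomes technically trivial.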

However, opposition has already formed to limit the use of RFIDs, including CASPIAN (Consumers Against Supermarket Privacy Invasion and Numbering).[6] At the end of 2003, about thirty US associations wrote a manifesto on limiting the use of RFIDs.[7] This manifesto poses various questions that may be summarized in two points.

  • RFIDs can easily be hidden and, as long as they are active, they provide information on the person carrying them, including the objects and how much money the person carries.
  • Unique identification means that an object is unambiguously identified. This enables information to be cross-referenced. The most obvious example is checking against the identity of the person carrying the object (his bank card, for example), but more subtle combinations are possible using apparently insignificant information.

Associations generally propose that the use of RFIDs be regulated: clear labeling of products containing them; full disclosure of their specifications, their purpose, and the information they carry; limits on the data collected and on cross-referencing; or even the possibility of removing the RFID. Defenders of the technology point to their limited range, the ease with which the emitted signals can be blocked, and the fact that supervision of these objects ceases at the shop door. However, distrust has been fuelled by a series of semi-official tests carried out (or planned) on consumers, which led CASPIAN to launch boycotts, upon which the companies involved scaled back their projects.

Recently, discussions have been started on the contexts in which RFIDs should not be used, including the first workshop on privacy and RFIDs organized by MIT on 15 November 2003.[8] There are debates on the acceptability of this technology and on technical counter-measures such as ‘killing’, a sort of triggered apoptosis of RFIDs. Regulatory authorities in charge of privacy protection are also considering this topic. They met in 2003 in Sydney and published a common statement.[9]

The basis for fair use of RFIDs is more or less set: a balance between the benefits of RFID technology and the right to privacy. However, implementation has costs, and enforcement may not be easy. The debate is now evolving towards a more or less organized confrontation between consumer organizations, consumers who are less concerned about RFIDs, retailers, and regulation authorities.

2.5 Human implants

A technique exists for implanting RFIDs or ‘smart dust’ in the human body. This is already routinely done to identify pets, and could easily be extended to humans. Tests have already been made with volunteers, including a Florida family in March 2002 and a Miami journalist in April 2003.[10] More recently this technique has been used in a Spanish nightclub and a Mexican administration.[11] In 2004, an estimated one thousand people were implanted. The product used is the Verichip™ by the company Applied Digital Solutions (ADS), which also sells the Digital Angel device (not yet implantable) that interfaces with the GPS network to locate its bearer.[12] These systems have a number of potential applications:

  • Marking individuals for surveillance purposes. For example, an anti-kidnapping system has already been proposed by the SOLUSAT company in Mexico,[13] a country in which the disappearance of children is a serious problem. Another use is the medical monitoring of patients for whom hospitalization is not necessary, e.g. those with Alzheimer’s disease.
  • Means of payment. The company ADSX offers the Veripay™ system to enable secure payments similar to a chip card, but with the chip implanted beneath the skin.[14]
  • Access control. Implanted chips, which cannot be lost or easily stolen like badges, could be used as a means of access to secure premises, such that access is permitted only when the system recognizes the chip signal.

Such systems have provoked strong reactions. The first reason for this is concerns about where the technique could lead. All sorts of individuals could potentially be kept under surveillance this way. Another consideration is the religious aspect. There are currently a number of websites that refer to these devices as the "mark of the beast" in reference to the Book of Revelation (13:11, 16, 17).

Then I saw another beast coming up out of the earth, and he had two horns like a lamb and spoke like a dragon [...]. He causes all, both small and great, rich and poor, free and slave, to receive a mark on their right hand or on their foreheads, and that no one may buy or sell except one who has the mark or the name of the beast, or the number of his name.

This quote shows us that the fears generated by these new technologies trigger emotions that may be deep-seated in the human psyche, particularly the reservoir of symbols, images, and archetypes linked to the sacred.


3. Progress on trial

3.1 An evolving conception

Scientific and technical progress is traditionally considered a factor that improves our quality of life, in particular when it leads to the development of new products and services that meet society’s expectations. Good examples of this are medicine and environmental protection. In a more general way, we tend to see scientific and technical progress as one of the major factors influencing the development and competitiveness of modern economies. This role is likely to increase in the future with the advent of the knowledge society, in which the capacity for innovation becomes a strategic element for both companies and countries. In this context, nanotechnology and biotechnology are set to be at the heart of a new high value-added industry, the practical implications of which extend to a large number of fields. For nanotechnology alone, the size of the potential market is measured in thousands of billions of euros per year (Roco 2001). Such are the considerations that have prompted the current race between the major trading blocs of Europe, the US, and Asia to invest in this field of research.

Co-existing with this positive and widely held view of scientific and technical progress is a growing challenge to the broader philosophical and sociological concept of progress. Wagar (1969), on whose views we draw here, has pointed out that progress is a secularized religious idea, the origin of which can be found in a linear conception of time whose basis in the West is Christian theology, notably that of Saint Augustine who, in addition, insisted on the subjective conception of time. According to that idea the whole of human history can be interpreted as the fulfillment of God’s design: the upward movement of humanity towards its creator, seen as the Golden Age. This conception is radically opposed to that of ‘traditional societies’, for which the golden age is situated at the origin of the world, and the passing of time can only result in degradation and corruption of the primitive state. The notion of progress began to be contested, implicitly at least, by the Romantic movement in the 19th century, which exalted Nature. However, it is only in the 20th century that rationality, its avatars science and technology, and finally progress itself were explicitly challenged and even put on trial (Van Doren 1967). This process was also marked by the realization that progress had none of the characteristics traditionally attributed to it: it is neither universal, nor continuous, nor necessary, nor unambiguous, nor linear, nor cumulative, as scientists claimed. On the contrary, authors such as Lessing, Lévi-Strauss, and Popper stressed its local, discontinuous, and non-linear nature. The paths taken by progress are multiple, complex, and often unpredictable.

Can we still believe in progress – a progress that has become something of a paradox (Easterbrook 2003)? Regarding the progress of scientific and technical knowledge, everyone would agree on the explosion of ideas since the beginning of the 20th century and the many positive consequences, impossible to imagine a century or even a few decades ago. However, regarding material, economic, social, or moral and spiritual progress, the answer is more ambiguous. In particular, it is the mechanical link between knowledge, wealth, and happiness that has been contested. Negative effects, the ‘damage of progress’, are increasingly visible on both the local and global scale – witness the controversy surrounding the greenhouse effect. Beyond this, there is a growing, legitimate sense that ‘we no longer control the control’, to borrow a favorite expression of Etienne Klein, of phenomena and forces that science has enabled us to understand. The inextricable complexity of the real is imposed on us, with its corollary, risk, as an irreducible component of human action. Finally, since Sorel in 1906 (Illusions of Progress), political debate on how the positive effects of progress are divided up has been a recurrent theme of writers and social movements. Initially proposed by Marxists, this issue has been taken up by the anti-globalization movement.

However, while debate and objectivity are always legitimate and often necessary, it is important to avoid ‘throwing out the baby with the bathwater’. We should resist the temptation to minimize the real contribution of science and technology – and in so doing, dashing the considerable hopes that they still justify, for example in the medical field – by holding them to account for consequences for which they are not necessarily responsible. Who would be ready, on a personal level, to turn his back on science and technology?

3.2 Science and risk: towards a sociological approach

Sociologists of science and technology have proposed different models, according to the school to which one refers, for interpreting the evolution of society. One of the most productive, and perhaps the best suited to the situation of nanoscience, may be that of the great German sociologist Ulrich Beck (2001), who investigated what he calls the ‘risk society’. According to Beck, modern society is in the process of moving towards a new type of society in which risk and its management play a central role. This is a ‘reflexive’ society, the operating patterns of which are still emerging. Among the elements that characterize it – without laying claim to an exhaustive and in-depth analysis of Beck’s concepts – is the fact that threats have become internal. They essentially result, not from risks linked to Nature, but from the very activities of human beings, hence the link with the fundamental themes around which our fears revolve. Knowledge, perfect technical mastery, decision-making processes – everything, or nearly everything, now contains risk, says Beck. Moreover, boundless belief and confidence in science and technology, the supposed source of inevitable progress and a mainstay of the age of science, has now given way to a more modest conception. ‘Science in action’, to borrow the expression of Bruno Latour, has a more local, context-based character, which is accompanied by the legitimate uncertainties and doubts of a reflexive society. Finally, representative democracy, founded on the philosophical principles of Montesquieu and Locke, would gradually be replaced – even if it is obviously still the institutional model of our countries – by a deliberative democracy whose theorist would be the contemporary German philosopher Habermas.

We are now at the other end of the spectrum from the optimism of the Enlightenment, science no longer being the guarantor of progress. Now it is Nature’s turn to lend reassurance, whereas to our ancestors this same Nature seemed an implacable force, whose ‘master and possessor’ (in the words of Descartes) they sought to be. From now on, science makes us nervous, and we are less and less convinced that technical performance has made us more free and more happy (Easterbrook 2003). Even more, Martin Rees (2003) depicts various disasters that could be brought about by science.

Given that this new situation can lead to stasis or even to rejection, it is worth seeking to understand the phenomenon. However, there are always, in varying doses, three basic components pointing back to fundamental themes, which can be compared to Jungian archetypes, around which all fears linked to science and technology seem to revolve (see Figure 1 and Farouki 2001). These three themes are closely linked and may, in certain cases, be intertwined. The only aim we have in dealing with them separately, as we do here, is to clarify the form they take and produce an analytical scheme to be used later. We will describe the form of these fundamental themes by first characterizing certain fears that are traditionally associated with them, then identifying the link that can be made with nanoscience and nanotechnology, before mentioning their link with tradition, particularly the Judeo-Christian tradition. Analyzing these fears in this manner in no way implies that they are illegitimate or discredited. While anxiety is, for psychologists, without objective cause and foundation, fear on the other hand is always rooted in a certain reality. Although the numerous predictions of the end of the world made on the occasion of previous scientific developments have up to now been without basis, history also shows that certain fears may be confirmed by experience. Examples are Chernobyl, or the deliberate dissemination of non-degradable products that turned out to be dangerous, such as asbestos and DDT. It should also be pointed out that fear itself can be useful in the sense that, as Hans Jonas (1990) convincingly argues, it can serve as an alarm bell alerting society to consider the problem, identify the exact nature of the risks, develop a research program if necessary, and take the necessary prevention measures. Jonas describes this process as the ‘heuristics of fear’, and considers it a positive contribution on the socio-political level.

3.3 First type of fear: Loss of control

The first theme is that of an experiment that goes wrong, or a product that after commercialization provokes irreversible negative consequences, going as far as the extinction of the human race or even the disappearance of the planet. There are three scenarios:

  • A sudden event that leaves no time to react. In this case irreversibility is due to the strength of the forces unleashed over which the scientists lose control. This scenario applies in particular to processes that use high-energy sources, such as the nuclear industry or particle physics.
  • Control can be lost because there is no possibility of reacting. The typical case is the dissemination of products that turn out to be harmful. Irreversibility here comes from their long life span or their ability to reproduce. In the present context, the main concern is the dispersal of fragments of nanomaterials. This scenario draws credibility from the fact that it has already happened with industrial products, such as the so-called phytosanitary substances (insecticides, fungicides, etc.) that have been used on a large scale in intensive agriculture since the middle of the 20th century. Entities capable of reproduction – all the more disturbing in that stopping the release at its source is not sufficient to stabilize the situation – concern living material (micro-organisms, the DNA of GMOs).
  • In addition to these ‘extreme’ examples of loss of control, we should consider ‘chronic’ cases such as pollution, changes to the ozone layer, or the accumulation of greenhouse gases. Here, the products sufficiently benefit a group of individuals, either of a geographical region or a particular generation, that the situation continues, because the people who benefit do not always perceive the disadvantages. If the perception of benefits and negative effects differs, the debate will focus as much on the evaluation of the advantages and disadvantages of the technology as on the injustice of the way risks are shared. Before the publication of the IPCC reports,[15] which pointed the way to an international scientific consensus, this was the case with the debate on greenhouse gases. The irreversibility of the situation is no longer linked solely to a given technology, which could simply be replaced to avoid the problems and associated risks. It is linked to the way ‘human society’ works in the broadest sense, whether this is due to economic forces or to the balance of power between countries. The solution can only lie in changing the mechanisms that regulate human society as a whole, on which – since these are mainly international treaties – it is difficult to find a consensus (e.g. the agreement on CFCs or the Kyoto Protocol).

This loss of control concerns or panics certain people to such a degree because they believe it can provoke considerable upheavals, even the end of the world. In the traditional imagery of the West, this fear is crystallized around the notion of the Apocalypse.

The Apocalypse (etymology: unveiling, revelation – hence the Book of Revelation) is a fundamental theme of Judeo-Christian eschatology (Cohn 1983). It is based on a very specific view of time and the meaning of history, not shared by Eastern religions such as Buddhism. In particular, its linear conception of time is an Augustinian notion that bourgeois society allowed to flourish in the nineteenth century and which is strongly linked to the notion of progress. The return of the golden age, at the end of the ‘Adamic’ cycle of humanity, is to be preceded by a period where fire and blood rain down on the earth in order to chase away the forces of darkness once and for all. The Book of Revelation is the key example of this type of literature. The theme has therefore been linked to God’s judgment (or the Final Judgment, see below) for 2000 years.

In the course of the 20th century, the decline of religious belief in the West has been balanced by the growing notion that humanity might itself provoke this Apocalypse, using the ever more powerful ‘arms’ provided by science and technology. A number of novels take up this theme of a worldwide catastrophe provoked by humans. At the beginning of the twentieth century the theme of the Apocalypse was still related to natural disasters (volcanoes: Krakatoa (1883), La Montagne Pelée (1902); earthquakes: San Francisco (1906), Valparaiso and Messina (1908); rising waters causing a new flood; demography, with the yellow peril, etc.). Now, the fear of the end of civilization is justified by the self-destructive capabilities, supposedly uncontrollable, that science and technology place at the disposal of humanity (Boia 1989). In 1929, however, the theme of the Apocalypse started to tap into other sources. That year, the New York Times informed its readers of the theories of eminent scientists who believed that the entire universe could accidentally flare up like a gunpowder fuse. This fear was then strengthened after the discovery, shortly before World War II, of the uranium fission reaction. There was fear that a chain reaction triggered experimentally, e.g. by an atomic bomb, could spread across the entire world. Famous scientists such as Langevin had to intervene to calm people’s minds (Weart 1988).

3.4 Second type of risk: Abuse of discoveries

Even if an innovation presents no risk of loss of control in one of the ways mentioned above, it may have serious consequences or turn out to be harmful if it is used in a manner that was not foreseen, particularly in the hands of ill-intentioned parties. There are several levels of concern about such abuse, depending on the person to whom the evil intent is ascribed.

The first case is an individual to whom a new product or technology, diverted from its intended use, gives increased power to cause harm. Obvious examples are the appearance of new types of criminal use of new technologies, including terrorism. The distribution of strains of anthrax at the end of 2001 in the US is a good example of this; the use of Sarin gas in the Tokyo underground in 1995 by the Japanese sect Aum Shinrikyo, which, incidentally, claimed in this way to be triggering the Apocalypse, is another. Such technologies are relatively sophisticated, requiring the collaboration of at least some highly competent specialists, which makes it easier for the security services to investigate.

A group of individuals, a private company, or a government may also use a new technology to gradually change the conditions of life for everyone. This could be a totalitarian country imposing certain practices to ensure the subservience of its subjects, a theme literature has exploited on several occasions (Brave New World by Aldous Huxley; 1984 by George Orwell). But reality may be much more subtle and banal. The geopolitical situation today may lead to a particular surveillance or control system being adopted in a fully democratic manner to counter risks that are deemed intolerable, such as certain forms of criminal behavior or terrorism. Also, a new technology might be promoted in the name of moral aims, such as feeding the population and fighting hunger and malnutrition, as has been done, for instance, to promote GMOs.

The theme that underlies all the fears, and which is explicitly structured around the notions of good and evil, is that of the Sorcerer’s Apprentice.

The Sorcerer’s Apprentice is a classic literary theme, often featuring in works of a fantastic nature. The central idea is that a scientist, free from all moral scruples, exploits the natural forces he has discovered to ends that are not exclusively good, betraying the implicit mandate he has from society to carry out his research. Not only is the Sorcerer’s Apprentice shown to be irresponsible, at a certain point he loses control of the forces he has unleashed. Moreover, the scientific and technological resources at his disposal mean that he is capable of triggering the Apocalypse himself (see above), such that it is no longer seen as a human fate imposed by the will of God. The ill-fated action of the scientist may even be deliberate. Popular characters such as Dr. Faust symbolize the mad scientist who has, so to speak, entered into a pact with the devil. Great writers, such as J.W. Goethe, H.G. Wells, M. Shelley, Th. Mann, and A. France, have regularly used this theme, which has been present since the fifteenth century and has also inspired theatre and opera. More recently, starting in the late 1970s, the Sorcerer’s Apprentice has also been associated with the biologist who is able to manipulate life itself.

After the discoveries related to the atom, it is the reign of biology and the possibility of genetic manipulation that have led to the re-emergence of the myth of the superman. ‘Progress’ seems threatening: will the Sorcerers’ Apprentices stop in time (Rifkin & Howard 1977)? The debate developed during the 1980s on the ethical level, leading scientists such as Testart in France to stop their research on their own initiative.

3.5 Third type of risk: Transgression

Developments in science and technology may also provoke reactions such as ‘it’s going too far’ or ‘somebody is trying to play God’. Everyone has their own, personal definition of the limits that humans should not exceed, whether or not this is based on a sacred view of the world. This definition draws on a mixed set of elements in which everyone finds their own meaning: scientific knowledge, precedents, cultural myths, and personal religious beliefs. These reactions, if it is felt that a transgression has taken place, may be violent even if there is no immediate danger. If these acts show a degree of uncertainty with regard to their consequences, the perception of risk may be boosted by the only partly conscious idea of ‘divine punishment’.

A typical example is an experiment that allows doing what has never been done before, which in some way is a transgression in itself. There are numerous precedents, and few directors of new experimental installations have been spared the task of refuting apocalyptic scenarios. For instance, the Tokamak TFR, then the most powerful machine of its kind in the world, was built at the beginning of the 1970s at the French Atomic Energy Commission (CEA) in Fontenay-aux-Roses, France, to study thermonuclear fusion. Some opponents of the project were afraid that the hot plasma from this machine might be the source of intense electric fields that would cause a catastrophe.

One of the most recent cases is the Relativistic Heavy Ion Collider at Brookhaven in the USA. The purpose of this collider is to study frontal collisions at very high energy between heavy ions, heating them up to temperatures close to those that existed a few fractions of a second after the big bang. Two scenarios went around the world. The first predicted the appearance of a black hole in the interaction zone that would swallow up the entire planet. The other scenario was the appearance of ‘strange’ particles (with reference to strangeness, a property of certain quarks) that would swallow the earth atom by atom. A scientific panel was set up to try to provide rational responses to such concerns.[16]

However, the cases that seem to have the most resonance, both on the emotional level and in terms of the ethical debate they trigger, relate to progress in biotechnology. This technology does in fact pose a potential challenge to the fundamental conception of life, the human being, and even the anthropological structure of society, such as parental relationships. Cloning and experiments on stem cells have been sufficiently discussed in recent times.

Even when the potential danger is not clearly identified and it is not clear that a project will be successful, the very idea of transgressing the boundaries of forbidden knowledge seems to generate fear. The archetype of the Tree of Knowledge illustrates the religious ban on acquiring knowledge and, more importantly, on releasing the ‘hidden forces’ of Nature. This ban is common to a number of cultural areas: the Greek myth of Prometheus, condemned to have his liver torn to shreds by the eagle of Zeus for having stolen the sacred fire of knowledge from the gods, is also linked to it. However, the Christian West has remained particularly marked by the Biblical story of the fall of Adam, the ancestor and symbol of all humanity. This fall is held to be the result of ‘sin’, the transgression of a major taboo: man attempted to become the rival of his Creator by gaining access to forbidden knowledge. This knowledge bears a curse, and seeking to understand the hidden forces of Nature is sacrilegious – the vain and curious desire of research, called knowledge and science, as denounced by Saint Augustine. The discovery of ‘formidable hidden energies’ in matter, asking only to be released in order to return the world to chaos, simply strengthened in parts of the population, often unconsciously, the feeling that in the 20th century humanity reached the extreme limit of what was permitted. The other strand of Christian tradition, to which the theology of Nature is related, considers that doing science may be part of worshiping God. However, this tradition has not been dominant in the building of popular mental imagery.

Finally, the archetype of the Tree of Knowledge has for sixteen to eighteen centuries been associated with the very rich and complex mental imagery of alchemy. In this ‘art’, the transformation of matter (for example, so-called base metals) or, on a more subtle and profound level, the individual illumination of the ‘seeker’, was necessarily accompanied by a form of death, according to a psychological process that was studied in detail by Jung (1971). Re-birth, or ‘resurrection’, and death are therefore the two sides of the same process of radical transformation of humans and, by extension, of the world (see the theme of the Apocalypse discussed earlier). Indeed, Soddy and Rutherford had a clear understanding of the very strong link with this historical and psychological background, as they are reported to have explicitly mentioned, at the crucial moment of their discovery, the ‘alchemical’ nature of the transmutation of elements, at the risk of being excluded from the scientific community for using this term (Weart 1988). Around 1930, however, Rutherford did assume this responsibility by publishing a book on atomic physics aimed at a lay audience, called The Newer Alchemy.

4. What can be done about nanosciences?

4.1 Nanoscience and fear

Is the emerging fear of nanoscience and nanotechnology justified? Is it a cause for concern? Is there a controversy that could threaten research and applications? How should we analyze this? What can we do?

Based on experience, the initial response of scientists, engineers, and large companies when their activities are called into doubt or simply questioned tends to be unsatisfactory and ineffective. Calling the arguments of demonstrators irrational and their position illegitimate, claiming that informing or educating the public would be enough to allay doubts and calm fears, or that it is all a plot, will never give a balanced understanding of the situation. Moreover, this type of approach is likely to lead to a standoff situation from which nothing positive can emerge. Sociologists point out that those involved in a debate always believe they have good reasons for their actions, and that their logic and ‘world view’ – even when unscientific – have a fundamental legitimacy. Such an attitude accepts the existence of more than one rationality in society.

Moreover, scientific discourse can be perceived as contradictory; Jean-Pierre Dupuy (2004) speaks of the ‘double language’ of the scientific community. Growing media interest in the results of science and technology too often leads specialists to claim that such and such a development is a true revolution, paradigm shift, or a major disruptive technology. After all, decision-makers need to be persuaded to finance research in a context of increased financial constraints. However, as soon as fears emerge among the public, the same people deliver a toned-down version of events in an attempt to be reassuring: actually, everything is under control, the techniques are perfectly mastered, Nature has been doing that forever, etc. Discourse on nanoscience and nanotechnology does not break with this pattern.

To better grasp the emerging constraints linked to nanoscience and nanotechnology, it is necessary to understand that these take place in a much broader context of long-term changes in society. Over and above the specific characteristics of the field to which they apply (the ‘nanoworld’), these fears are only one of several elements of what is undoubtedly a profound change in society’s relationship to science and technology.

Beyond this general analysis of the evolution of our societies, what can we propose to improve the way we manage the difficulties resulting from scientific and technical progress in the field of nanoscience and nanotechnology? In order to try to provide an answer to this question, a first step is to learn from other debates (nuclear, GMOs, etc.) in which a lack of understanding and absence of dialogue between the various parties involved (experts, public, associations) meant that these questions could not be dealt with in an optimal way. A second step is to look at the work of sociologists of science and technology for concepts, tools, and methods that will ensure that the debate is constructive, while respecting the position of all parties involved. As shown in Section 2, there is a wide variety of questions. Having identified three categories of problems that are linked to the fundamental themes of fear, we are now able to address one by one the difficulties with which we might be confronted.

Figure 1: The three corners of the triangle represent the basic fears discussed in Section 3. The rectangles at the corners represent the positions of fears resulting from various new technologies (nanotechnologies, including their convergence with other disciplines), regardless of how realistic they are. Some examples of already existing or past issues are included.

 

4.2 The loss of control: New products

Several issues arising from nanotechnology belong to this category. The short-term ones consist in avoiding loss of control when a product is introduced to the market, particularly if it is dispersed into the environment, the food chain, etc. in a manner that is difficult or impossible to reverse. A typical example is the introduction of new materials as discussed in Section 2.3. Some of the new materials could release harmful nanoparticles into the environment. This point raises increasing concerns since the production of various nanostructured materials and nanoparticles is expected to rise drastically, sometimes from almost zero, as with nanowires and carbon nanotubes, and some of them could be aimed at a mass market. Another case could be ‘processed’ food. A closely related issue is that of GMOs, whose opponents seem to fear, firstly, that GMOs may in certain cases be harmful to health – a fear that so far has no scientific foundation in the cases discussed – and, secondly, that when disseminated they modify the genomes of natural species, which does seem to be unavoidable and irreversible. What would the application of a principle of reasoned precaution involve in such situations?[17]

There are four important elements to be considered.

  • Firstly, it would be useful to establish mechanisms for approving the marketing of products that pose potential risks. A reasonable balance would have to be found between the dynamic forces of innovation, which must continue to be encouraged, and protection of the population and the environment. Numerous standards and regulations exist, one of the best known being the approval process for medicines. The questions that need to be answered are: (1) Is the existing mechanism sufficient? (2) What processes need to be established to regulate the release of new products? These questions, which have already been raised with regard to nanomaterials, are far from trivial, as we have shown above. However, they are urgent, because innovations are numerous and varied and the products are sometimes hidden.
  • Moreover, a flexible but effective monitoring and alert mechanism could also be introduced, taking its cue from the surveillance networks that exist for drugs.
  • Callon’s suggestion is to establish ‘hybrid forums’, major deliberating mechanisms to manage controversies over scientific and technological innovations (Callon et al. 2001). Such spaces for debate and interaction between a wide range of parties, including scientists, industrial corporations, engineers, institutions, associations, and the public, must have clear rules setting out, in particular, how the work of the forum relates to the real decision-making process. While few decisions would literally reproduce the conclusions adopted by the forum – even if they are the result of a true consensus – these conclusions must be taken into account in the decision-making process, in a manner that is transparent from the outset. The hybrid forum, according to Callon, must be a space where those taking part can explore options and learn together, a process in which the identity of participants may change or be built up over time. Popular knowledge would not be discredited and considered illegitimate but, on the contrary, respected and taken into consideration. Finally, the hybrid forum enables the setting up of a procedure for managing controversy. Free and open debate between all parties concerned, such that all opinions can be heard and respected, might help avoid rejection as a point of principle, which, for the most part, is due to a lack of prior discussion or of a clear perception of the benefit of the innovations. This point could be important because, unlike other innovations discussed later, one can be exposed without having any control over the matter or receiving any direct benefit, as when a nanomaterial is introduced into a product to simplify its manufacture but without any gain for the customer. Nevertheless, this process will have limitations. Indeed, a specific feature of nano-products is that there is a huge variety of innovations, and it is hard to decide which ones should be discussed. It would be difficult, for instance, to organize a public debate for every new textile brought to market.
  • Finally, specific research is required to reduce scientific uncertainty as much as possible. Risk is now a normal part of our technoscientific society, as Beck has pointed out, and society must learn to adopt a questioning attitude towards its own practices and productions, characterized in particular by the fact that research always accompanies action. One question that will be increasingly asked is what type of research should be encouraged to optimize the mechanisms discussed in the two previous points. There are at least three aims: (1) enough background knowledge to define criteria allowing toxicity to be assessed; (2) a clear view of how the products are degraded in the environment; (3) a better knowledge of the fate and behavior of nanoparticles in the environment.

 

4.3 The loss of control: An experiment that ‘goes wrong’

In addition to experiments that should not be undertaken for ethical reasons, one can consider cases where an experiment could ‘go wrong’. As discussed at the end of Section 2.2, the most realistic scenarios are related to the convergence of the nano- and biosciences. A good illustration is the synthetic biology projects at the Institute for Biological Energy Alternatives, based in Maryland.[18] The target is to create new types of organisms with an artificial genome that are capable, for instance, of producing hydrogen or sequestering carbon dioxide. The team’s idea is to start from what already exists and carry out modifications, so strictly speaking it is not a synthetic bacterium. It is not known whether, and even less how, it would be possible to create a living cell from its components, which, placed together, do not assemble themselves spontaneously into a living bacterium. If the project successfully creates ‘efficient’ bacteria, masses on the order of the worldwide CO2 emissions – billions of tons – would have to be produced and released into the environment, since a bacterium can only absorb a carbon mass comparable to its own weight. Tests of samples could also lead to the dissemination and possibly fast expansion of the new species. There are similar fears, particularly in France, about open-field experiments with GMOs. Uncontrolled military experiments, terrorist use, and long-term grey goo scenarios belong to the same category.
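The scale of this claim can be checked with a back-of-envelope calculation. The sketch below uses illustrative figures that are our own assumptions rather than values from the article: annual worldwide CO2 emissions of roughly 30 billion tonnes, and the stated premise that each bacterium fixes about its own mass in carbon.

```python
# Rough estimate of the bacterial biomass needed to absorb worldwide CO2
# emissions, under the article's premise that a bacterium can fix a carbon
# mass comparable to its own weight. All numeric inputs are assumptions.

annual_co2_emissions_t = 30e9        # ~30 billion tonnes of CO2 per year (assumed)
carbon_fraction = 12.0 / 44.0        # mass fraction of carbon in CO2 (atomic masses 12 and 44)
carbon_to_fix_t = annual_co2_emissions_t * carbon_fraction

absorption_ratio = 1.0               # tonnes of carbon fixed per tonne of bacteria (assumed)
biomass_needed_t = carbon_to_fix_t / absorption_ratio

print(f"Carbon to fix:            {carbon_to_fix_t:.2e} t/year")
print(f"Bacterial biomass needed: {biomass_needed_t:.2e} t/year")
```

With these inputs, the required biomass comes out at several billion tonnes per year, consistent with the order of magnitude given in the text.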

The issue is how to establish an efficient regulatory system that could authorize or forbid experiments or projects that might be risky. First, some limitations must be taken into account.

  • It is impossible to define a ‘dangerous zone’ within the realm of research topics. For instance, there is no ‘grey goo development program’, but there are plenty of experiments aiming at a better understanding and control of assembly, information processing, and chemistry at the molecular level, most of which aim at the design of better products, better drugs, etc. In addition, many of the risky ideas may arise from the unexpected convergence of innocuous ones.
  • Research is globalized. How can one stop a research program – supposing that this is fully justified – if research continues elsewhere on the planet? The intense competition between nations and multinational corporations in the military and economic fields makes it vain to hope to stop research from which certain parties expect decisive advantages in the global competition for power and domination.

An efficient control system should meet the following three conditions.

  • There is a need to invent and implement social and/or institutional mechanisms to control research, while avoiding any drift towards ‘obscurantism’.[19] The organization of research should define the responsibilities of the various actors. Governments, funding agencies, and scientists must consider the long-term consequences of their research. Bodies such as ethics committees and foresight groups are likely to provide valuable input. Debates should be organized between the supporters of such experiments, who will often tend to underestimate, or even ignore, the negative effects of their own work, and a panel of scientists with different opinions. The way in which research is organized and financed should provide a first check on this, since the investor is in principle required to make a judgment. However, the trend in most modern research systems is just the opposite: there are numerous sponsors who may, in addition, have intricate links, meaning that no one has the necessary overall view. Responsibilities are diluted, and the interests of the various parties may diverge.
  • The general public must also be involved. The goal is to enable constructive debate about matters relating to science and technology, including the questioning of certain issues, without being identified with one of the supposed enemies of progress. What, then, are the uncertainties? What are the real and perceived risks? What are the advantages and disadvantages, for whom and when? Is there a ‘real’ controversy within the scientific community itself? What interests are at stake? A controversy is never irreversible, but neither is the technical project set in stone. Even when there is no controversy, strictly speaking, or imminent danger, these forums for debate and discussion would provide honest, competent, and well-argued information. It is essential to separate the real from the imaginary, so that we can concentrate on the ‘real’ questions. For example, from our point of view, the fear of nanotechnology exploiting quantum effects is largely unfounded. This point of view needs to be expressed, justified, and if necessary criticized in an open manner. On the other hand, the question of linking nanotechnologies with complexity or biology cannot be so easily brushed aside.
  • Only an international consensus, on very precise issues and backed up with monitoring and control mechanisms, could achieve such a result. As a first step, the experience gained with various regulatory mechanisms should be shared between actors, so as to build at least a common awareness of the underlying issues; as a second step, progress should be made towards deeper international integration.

 

4.4 Abuse of discoveries

This difficulty concerns the need to avoid, as far as possible, the abuse of scientific discoveries and technological innovations. Progress in nanotechnologies and their convergence with other technologies may offer various opportunities for abuse. This may concern privacy, as discussed in Section 2.4, and the spread of biometric techniques and DNA tests. For instance, last century’s eugenics may return through new (bio)technologies, perhaps in another form that replaces the concept of race with predisposition to a given disease. Finally, one theme that has re-emerged is the development of new weapons based on nanotechnologies, for example in the form of micromachines, as a natural extension of biological weapons.

In such a society, to understand innovation we need a new model, such as that proposed by Callon et al. (2001). The traditional approach, now superseded and inoperative, assumed a pre-defined technical object, with a set of features, released into a society that would demonstrate a lower or higher degree of acceptance of the innovation, and would occasionally put up resistance that it would be necessary to overcome. In the models proposed by sociologists of science and technology, e.g. by Callon and Latour, the technical and social characteristics of the technical object are negotiated and produced simultaneously. It is interesting that this model presents analogies with the debates taking place around RFIDs and possible technical characteristics, such as ‘killing’, discussed in Section 2.4. The spatial extension of the invention takes place through a complex process of ‘translation’ within a network of participants, among whom the innovator must above all find allies, who will then have their own interests and reasons for propagating its use. Once more, the globalization of research considerably limits the real impact of any local or national action. Only an international consensus promises to achieve what has already been accomplished, for instance, for chemical and bacteriological weapons.

Aside from the fact that the notion of ‘abuse’ may be relative, it will always be difficult to arrive at a consensus because diverging interests are likely to be at stake. For example, some parties, such as producers and distributors, will stress the considerable advantages to be gained through the systematic use of RFIDs for managing and tracing certain products, while others, in particular consumers and citizens, may see their use as putting individual liberties at risk. An intrusive technology may result from a trade-off in which a service, or greater safety, is exchanged against privacy protection. Examples of such questions include extensive video monitoring coupled with biometrics, databases of genetic fingerprints, etc. The trade-off may be an unstable equilibrium between groups holding strongly opposed opinions.

There is no absolute truth in this matter, and both points of view can be defended. In a democracy, only society as a whole is able to identify what, as far as it is concerned, constitutes real progress; it is a political question, in the best sense of the term. In this context, the consensus of the majority, which will be expressed in legislation, is forged by dialogue among the participants in the debate. Such negotiation supposes, on the one hand, a role for delegation and mediation – hence procedures for choosing representatives and spokespersons – and, on the other hand, the role of arbitrator and decision-maker to be played by political leaders. It requires transparent information and decision-making procedures, which are, rightly or wrongly, contested in various technical and scientific fields, such as nuclear energy and GMOs. Moreover, some research or development projects (e.g. rebuilding a pathogenic bacterium to investigate a disease that disappeared centuries ago) clash to such an extent with the shared interests and/or fundamental values of our societies that they are prohibited.

4.5 Transgression

The long-term goal of nanoscience is to understand how Nature works at the molecular level. Until now, Nature has been ‘protected’ by the barrier of complexity, so that even a deep understanding of each part does not lead to an understanding of the whole. An additional trend is that information technologies are spreading everywhere, so that Nature and human beings could become parts of a gigantic information system. This type of evolution drastically affects the relationship between humanity and Nature, as in the following cases.

  • As discussed in Section 2.2, nanoscience could lead to the manipulation of life. Traditional biotechnology is already capable of this, but the new factor, it is imagined, would be an unprecedented increase in the human manipulation of living matter, which could go as far as the creation of hybrids, monsters, chimeras, or other ‘unnatural’ beings, as in Crichton’s novel Prey.
  • Another question concerns the limits of humankind. The question already arises with issues such as stem cells or human cloning, both of which are related to the control of the DNA configuration in a cell. More generally, the body could eventually be considered a complex machine that can be repaired in case of failure and modified or even enhanced. Similarly, the impact of understanding and modifying the brain will raise new issues, such as the meaning of responsibility and feelings once they are understood in terms of circuitry and ‘wetware’.
  • In the shorter term, the mixing of information technology and life is a kind of shock. The introduction of external devices into the body, as discussed in Section 2.5, is considered a violation that causes stronger reactions than an external RFID attached to clothes or skin. Other technologies, such as brain imaging (e.g. for neuromarketing) or DNA analysis for nonmedical purposes, raise similar issues.

Such research can often also bring benefits, for instance for health, as is argued for stem cell research. Nevertheless, even if the market is the right regulatory system for many new technologies, some of the cases mentioned above need external regulation. Two important points must be considered.

  • As with today’s medicine and biotechnology, the issues must be addressed by external ethics committees or regulatory authorities, if possible before development. Here again, there is a limitation due to the globalization of research. Unlike the case of ‘dangerous experiments’, the absence of common worldwide rules creates no direct risk; however, a more permissive country could attract most of the research forbidden elsewhere and benefit from it in the long term.
  • As already discussed above, public awareness and debates well in advance are required, for at least three reasons: (1) they are a useful way to prepare the various arguments that could be taken into account by regulatory authorities; (2) the impact of some of this research is so large that science and the public must stay close to each other to avoid a divide; (3) hype, careless declarations, and the success of some science fiction movies blur the distinction between reality and fiction, and it is important to provide the information required to form a sound opinion.



5. Conclusion

Nanosciences and nanotechnologies form a rapidly growing field that already generates, within the scientific and technological community, many hopes of future discoveries, developments, and solutions to a number of societal problems. Simultaneously, fears of possible negative and uncontrolled impacts on humans and the environment are also developing steadily. In this paper, we propose a typology to classify these fears, which are shown to be associated with images, metaphors, and symbols deeply rooted in the Western religious tradition. However, we think that it is necessary, and urgent, to distinguish between the hype, notably due to the media coverage of the field, and reality. Strangely enough, the idea that there might be a problem with nanotechnologies first emerged within the community of experts and promoters of the field, at a time when the general public was not even aware of the existence or emergence of a nanoworld. Was it, then, initially only a media phenomenon?

Whatever the answer, we may have the opportunity, perhaps for the first time in the history of science and technology, to consider the development of new scientific knowledge and engineering capabilities simultaneously with their impact on society and the environment and, thus, to take appropriate decisions in time ‘to keep everything under control’. In a potentially controversial context, political decision-makers have the responsibility, with the active participation of scientists and engineers, to initiate, stimulate, and organize the public debate. Their objective should be to clarify the actual issues at stake, putting aside purely imaginary ones that belong rather to science fiction, as well as to identify methodologies to tackle these issues and to implement regulations, where necessary, to ‘master’ the development of nanotechnologies.

The difficulty of this task stems from the wide variety of (nano)objects, topics, and issues associated with the expressions ‘nanosciences’ and ‘nanotechnologies’. Indeed, nanoparticles, molecular robots, radiofrequency identification devices, etc., raise different questions and call for specific solutions. The possible toxicity of nanoparticles, which may be released massively into the environment, poses a different problem from the wide commercial diffusion of RFIDs, which may endanger the privacy of personal information, even in a democratic society.

In this paper, we make a number of proposals to tackle these difficult issues. We underline the importance of the role assigned to the public and, more generally, to all concerned social actors in any debate about science and technology. Callon’s hybrid forums appear worth considering seriously. Foresight exercises would also be very useful for building scenarios that properly take into account both the likely developments of sciences and technologies and societal needs, expectations, and fears. Until they are tested, we cannot know whether these proposals will in fact enable effective management of the controversies that could emerge. The case of the nanosciences could in this respect be exemplary, since the concerns and fears that they provoke were raised even before their actual development. Consequently, those working in this field, first and foremost the scientists and engineers, have the option of including these legitimate questions at the very core of their research and innovation.


Notes

[1]  The text of Feynman’s speech is on the Caltech website [www.its.caltech.edu/~feynman/plenty.html].

[2]  Joy’s article can be downloaded from [wired.com/wired/archive/8.04/joy.html].

[3]  Rocco 2001, Mnyusiwalla 2003; see for instance the debate on April 9, 2003 at [www.house.gov/science/hearings/full03/index.htm].

[4]  In a liter of cell culture from which insulin is to be made, there may be 10,000 billion protein-assembling ribosomes, each working at the rate of about 10 amino acids a second.

[5]  The website of the company DUST Inc. is [www.dust-inc.com].

[6]  The website of this group is [www.nocards.org].

[7]  The document ‘RFID Position Statement of Consumer Privacy and Civil Liberties Organizations’ is available at [www.privacyrights.org/ar/RFIDposition.htm].

[8]  The workshop website is [www.rfidprivacy.org/agenda.php].

[9]  Available on the website [www.privacyconference2003.org].

[10]  See the articles ‘Family Set to Get Chipped’ in TechTV [www.techtv.com] and ‘Miami journalist gets ‘chipped’’ in Worldnetdaily [www.worldnetdaily.com/news/article.asp?ARTICLE_ID=32286].

[11]  See the website of the Baja club (‘zona VIP’) [www.bajabeach.es] and [www.informationweek.com/showArticle.jhtml?articleID=23901004].

[12]  See the website [www.digitalangelcorp.com].

[13]  The website of the company is [www.solusat.com.mx].

[14]  See [www.adsx.com/news/2003/112103.html].

[15]  See the IPCC website [www.ipcc.ch].

[16]  See for example [nuclear.ucdavis.edu/NPG_rhic.html].

[17]  We understand this principle as calling for prudent action – not immobilization – when there is strong scientific uncertainty and possible irreversible and unacceptable consequences.

[18]  See the website of the Institute [www.bioenergyalts.org].

[19]  By this we refer only to the systematic rejection, without argument, of all research, and not to the act of contesting and challenging certain scientific and technical activities. The latter does not, in a democracy, constitute a subversive action; rather, it facilitates a legitimate debate within society about innovations that society will have to manage, and whose disadvantages it could possibly have to suffer.


References

Beck, U.: 1992, Risk Society: Towards a New Modernity, Sage Publications, London.

Boia, L.: 1989, La fin du monde – une histoire sans fin, La Découverte, Paris.

Callon, M.; Lascoumes, P. & Barthe, Y.: 2001, Agir dans un monde incertain – Essai sur la démocratie technique, Le Seuil, Paris.

Cohn, N.: 1983, The Pursuit of the Millennium, Oxford University Press, Oxford.

Crichton, M.: 2002, Prey, Avon Publisher, New York.

CTEKS: 2004, K. Bruland, A. Nordmann, J. Altmann, D. Andler, T. Bernold, W. Bibel, J.-P. Dupuy, D. Fitzmaurice, E. Fontela, T. Gaudin, R. Kneucker, G. Küppers, B. Masini: Converging Technologies – Shaping the Future of European Societies [europa.eu.int/comm/research/conferences/2004/ntw/index_en.html].

Drexler, E.: 1986, Engines of Creation. The Coming Era of Nanotechnology, Anchor Books, New York [www.foresight.org/EOC/].

Drexler, E. & Phoenix, C.: 2004, ‘Safe exponential manufacturing’, Nanotechnology, 15, 869-872.

Dupuy, J.-P.: 2004, ‘Pour une évaluation normative du programme nanotechnologique’, Annales des Mines, (February), 27-32.

Easterbrook, G.: 2003, The Progress Paradox: How Life Gets Better While People Feel Worse, Random House, New York.

EHS: 2004, Nanoparticles: An Occupational Hygiene Review, Research report 274 by the Institute of Occupational Medicine for the Health and Safety Executive [www.hse.gov.uk/research/rrpdf/rr274.pdf].

ETC: 2003a, The Big Down: From Genomes to Atoms [www.etcgroup.org/documents/thebigdown.pdf].

ETC: 2003b, Green Goo: Nanobiotechnology Comes Alive! [www.etcgroup.org/documents/comm_greengoo77.pdf].

Farouki, N.: 2001, Les progrès de la peur, Le Pommier, Paris.

Greenpeace: 2003, Future Technologies, Today’s Choices [www.greenpeace.org.uk/MultimediaFiles/Live/FullReport/5886.pdf].

Highfield, R.: 2003, ‘Prince asks scientists to look into ‘grey goo’’, News Telegraph, June 5.

Jonas, H.: 1985, The Imperative of Responsibility: In Search of an Ethics for the Technological Age, University of Chicago Press, Chicago.

Jung, C.G.: 1971, Psychology and Alchemy, Princeton University Press, Princeton.

Kurzweil, R.: 1999, The Age of Spiritual Machines, Penguin Putnam, London.

Lucas, C.: 2003, ‘We must not be blinded by science’, The Guardian (June 12).

Moravec, H.: 1999, Robots. Mere Machine to Transcendent Mind, Oxford UP, Oxford.

Mnyusiwalla, A.; Daar, A.S. & Singer, P.A.: 2003, ‘Mind the Gap: Science and Ethics in Nanotechnology’, Nanotechnology, 14, 9-13.

Nanoforum: 2003, Nanotechnology and its Implications for the Health of the EU Citizen [www2.cordis.lu/nanotechnology/src/whatsnew.htm].

NSF: 2002, Converging Technologies for Improving Human Performance: Nanotechnology, biotechnology, information technology and cognitive science, ed. by M.C. Rocco & W.S. Bainbridge, National Science Foundation, Arlington, VA [www.wtec.org/ConvergingTechnologies/].

Oberdörster, G.; Sharp, Z.; Atudorei, V.; Elder, A.; Gelein, R.; Lunts, A.; Kreyling, W. & Cox, C.: 2002, ‘Extrapulmonary translocation of ultrafine carbon particle following whole body inhalation exposure of rats’, Journal of Toxicology and Environmental Health, Part A, 65, 1531-1543.

Prince Charles: 2004, ‘HRH The prince of Wales: Menace in the minutiae – New nanotechnology has potential dangers as well as benefits’, The Independent (July 11) [comment.independent.co.uk/commentators/story.jsp?story=539977].

Radford, T.: 2003, ‘Brave new world or miniature menace? Why Charles fears grey goo nightmare’, Guardian online (April 29).

Rees, M.: 2003, Our Final Century, Will the Human Race Survive the 21st Century?, William Heinemann, London.

Rifkin, J. & Howard, T.: 1977, Who Should Play God, Dell, New York.

Rocco, M. & Bainbridge, W.: 2001, Societal Implications of Nanoscience and Nanotechnology, Kluwer, Dordrecht.

Royal Society: 2004, Nanoscience and nanotechnologies: opportunities and uncertainties [www.nanotec.org.uk/finalReport.htm].

Sanco, European Commission: 2004, Nanotechnologies: a preliminary risk analysis on the basis of a workshop organized in Brussels on 1–2 March, 2004, Health and Consumer Protection Directorate General of the European Commission [europa.eu.int/comm/health/ph_risk/documents/ev_20040301_en.pdf].

Service, R.J.: 2003, ‘Nanomaterials Show Signs of Toxicity’, Science, 300 (5617), 243.

Smalley, R.: 2001, ‘Of Chemistry, Love and Nanobots’, Scientific American (September).

Smalley, R.: 2003, ‘Nanotechnology – Drexler and Smalley make the case for and against molecular assemblers’, Chemical & Engineering News, 81(48), 37-42.

Swiss Re: 2004, Nanotechnology. Small matter, many unknowns [http://www.swissre.com].

Van Doren, C.: 1967, The Idea of Progress, Frederic A. Praeger, New York.

Wagar, W.: 1969, ‘Modern views of the origins of the idea of Progress’, in: W. Wagar (ed.), The idea of Progress since the Renaissance, Wiley, New-York.

Weart, S.: 1988, Nuclear Fear: A History of Images, Harvard UP, Cambridge, MA.

Whitesides, G.M.: 2001, ‘The Once and Future Nanomachine: Biology Outmatches Futurists’ Most Elaborate Fantasies for Molecular Robots’, Scientific American (September).


Louis Laurent and Jean-Claude Petit:
Commissariat à l’Energie Atomique, DSM/DRECAM, CEA/Saclay, 91191 Gif Sur Yvette Cedex, France; laurent@drecam.cea.fr, jcpetit@cea.fr

Copyright © 2005 by HYLE and Louis Laurent & Jean-Claude Petit