Nick Bostrom: The AI pioneers for the most part did not countenance the possibility that their enterprise might involve risk.

3/09/2014 - 21:19
[Superintelligence: Paths, Dangers, Strategies, Nick Bostrom, Oxford University Press, 2014, p. 5]
 
The AI pioneers for the most part did not countenance the possibility that their enterprise might involve risk. (11)  They gave no lip service -- let alone serious thought -- to any safety concern or ethical qualm related to the creation of artificial minds and potential computer overlords, a lacuna that astonishes even against the backdrop of the era's not-so-impressive standards of critical technology assessment. (12)  We must hope that by the time the enterprise eventually does become feasible, we will have gained not only the technological proficiency to set off an intelligence explosion but also the higher level of mastery that may be necessary to make detonation survivable . . .
 
(11) One exception is Norbert Wiener, who did have some qualms about the possible consequences.  He wrote in 1960: "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colourful imitation of it" (Wiener 1960).  Ed Fredkin spoke about his worries about superintelligent AI in an interview described by McCorduck (1979).  By 1970, Good himself was writing about the risks, and even called for the creation of an association to deal with the dangers (Good 1970); see also his later article (Good 1982), where he foreshadows some of the ideas of "indirect normativity" that we discuss in Chapter 13.  By 1984, Marvin Minsky was also writing about many of the key worries (Minsky 1984).
 
(12) Cf. Yudkowsky (2008a).  On the importance of assessing the ethical implications of potentially dangerous future technologies before they become feasible, see Roache (2008).
 
Wiener, Norbert. 1960.  "Some Moral and Technical Consequences of Automation."  Science 131 (3410): 1355-8.
 
McCorduck, Pamela.  1979.  Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence.  San Francisco: W. H. Freeman.
 
Good, Irving John.  1970.  "Some Future Social Repercussions of Computers."  International Journal of Environmental Studies 1 (1-4): 67-79.
 
Good, Irving John.  1982.  "Ethical Machines."  In Intelligent Systems: Practice and Perspective, edited by J. E. Hayes, Donald Michie, and Y.-H. Pao, 555-60.  Machine Intelligence 10.  Chichester: Ellis Horwood.
 
Minsky, Marvin.  1984.  "Afterword to Vernor Vinge's novel, 'True Names.'"  Unpublished manuscript, October 1.  Retrieved December 31, 2012.  Available at http://web.media.mit.edu/~minsky/papers/TrueNames.Afterword.html.
 
Yudkowsky, Eliezer.  2008a.  "Artificial Intelligence as a Positive and Negative Factor in Global Risk."  In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308-45.  New York: Oxford University Press.
 
Roache, Rebecca.  2008.  "Ethics, Speculation and Values."  NanoEthics 2 (3): 317-27.
 
via Mark Stahlman (Centre for Study of Digital Life)