It does not cover:
* material related to other definitions of the Singularity including technological acceleration and the superintelligence event horizon (see Yudkowsky, "Three Major Schools"),
* predictive futurism,
* existential risk,
* and the many relevant fields such as decision theory, cognitive neuroscience, and ethics.
There is an emphasis on academic work. However, the field is new and small, and much high-quality writing appears only in informal forums. Some of the best of this is listed below.
I'd like to make the bibliography comprehensive within its narrow field, so please add more items in the comments to help with future revisions.
- Michael Anissimov, "Consolidation of Links on Friendly AI," Accelerating Future, 2009.
- Stuart Armstrong, "Chaining God: A qualitative approach to AI, trust and moral systems," New European Century, 2007.
- Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence," in I. Smit et al. (Eds.), Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, 2003.
- Stephan Vladimir Bugaj and Ben Goertzel, "Five Ethical Imperatives and their Implications for Human-AGI Interaction," Dynamical Psychology, 2007.
- David Chalmers, "The Singularity: A philosophical analysis," 2010.
- Joshua Fox and Carl Shulman, "Super-intelligence does not imply benevolence," European Conference on Computing and Philosophy, 2008.
- Tim Freeman, "Using Compassion and Respect to Motivate an Artificial Intelligence," 2007-08.
- Ben Goertzel, "Thoughts on AI Morality," Dynamical Psychology, 2002.
- Ben Goertzel, "The All-Seeing (A)I," Dynamical Psychology, 2004.
- Ben Goertzel, "Encouraging a Positive Transcension," Dynamical Psychology, 2004.
- Nick Hay, "The Stamp Collecting Device," SIAI Blog.
- J. Storrs Hall, "Engineering Utopia," in Pei Wang, Benjamin Goertzel and Stan Franklin (Eds.), Proceedings of the First AGI Conference, 2008.
- Bill Hibbard, "Critique of the SIAI Guidelines on Friendly AI," 2003.
- Bill Hibbard, "Critique of the SIAI Collective Volition Theory," 2005.
- Shane Legg, "Friendly AI is bunk," 2006.
- Tom McCabe, "General Summary of FAI Theory," Life, the Universe, and Everything, 2007.
- Singularity Institute for Artificial Intelligence, "Reducing long-term catastrophic risks from artificial intelligence," 2010.
- Steve Omohundro, "The Nature of Self-Improving Artificial Intelligence," Singularity Summit (textual version available), 2007.
- Steve Omohundro, "The Basic AI Drives," in Pei Wang, Benjamin Goertzel and Stan Franklin (Eds.), Proceedings of the First AGI Conference, 2008.
- Steve Omohundro, "AI and the Future of Human Morality," Silicon Valley World Transhumanist Association Meetup, 2008.
- Eliezer Yudkowsky, "The Three Major Singularity Schools," Singularity Summit (textual version available), 2007.
- Eliezer Yudkowsky, "Artificial intelligence as a positive and negative factor in global risk," in N. Bostrom & M. M. Ćirković (Eds.), Global Catastrophic Risks, Oxford University Press, 2008.
- Eliezer Yudkowsky has written many of the important works in this field, mostly in non-academic contexts. See the items on the SIAI site as linked in Anissimov's bibliography. He has written dozens of important articles on the Less Wrong blog (formerly Overcoming Bias). Some of the most relevant ones are linked from the Paperclip Maximizer page.
Joshua Fox, http://www.joshuafox.com
Also: I.J. Good, "Speculations Concerning the First Ultraintelligent Machine," Advances in Computers, 6, 1965. Available at http://web.archive.org/web/20010527181244/http://www.aeiveos.com/~bradbury/Authors/Computing/Good-IJ/SCtFUM.html
Also:
Kaj Sotala: "Evolved altruism, ethical complexity, anthropomorphic trust: three factors misleading estimates of the safety of artificial general intelligence". Proceedings of the 7th European Conference on Computing and Philosophy (ECAP 2009). 2009.
Kaj Sotala: From Mostly Harmless to Civilization-Threatening: Pathways to Dangerous Artificial Intelligences. Proceedings of the VIII European Conference on Computing and Philosophy. Edited by Klaus Mainzer (Munich: Verlag Dr. Hut, 2010). Pp. 443-450. 2010.
Waser, M. 2008. Discovering The Foundations Of A Universal System Of Ethics As A Road To Safe Artificial Intelligence. In AAAI Technical Report FS-08-04. Menlo Park, CA: AAAI Press.
Hall, JS & Waser, M. 2009. Ethics for Recursively Self-Improving Machines. Presentation at the 2nd AGI Conference.
Waser, M. 2009. A Safe Ethical System for Intelligent Machines. In AAAI Technical Report FS-09-01. Menlo Park, CA: AAAI Press.
Waser, M. 2010. Designing a Safe Motivational System for Intelligent Machines. In Proceedings of the Third AGI Conference.
Waser, M. 2010. Why a Super-Intelligent God *WON’T* “Crush Us Like A Bug”. Presentation at the Third AGI Conference.
Waser, M. 2010. Deriving a Safe Ethical Architecture for Intelligent Machines. In Proceedings of the VIII European Conference on Computing and Philosophy. (Powerpoint)
Waser, M. 2010. A Game-Theoretically Optimal Basis for Safe and Ethical Intelligence. In Biologically Inspired Cognitive Architectures 2010: Proceedings of the First Annual Meeting of the BICA Society. (Powerpoint)
Thanks for compiling this! A few housekeeping notes: the link to Shane Legg's 'Friendly AI is Bunk' is broken, and the link to Joshua Fox and Carl Shulman's 'Superintelligence Does Not Imply Benevolence' is split in two, the first half of which links to a nonsense URL.
For those interested, I've developed a more thorough bibliography of Friendly AI, which I'll keep updated, here:
http://commonsenseatheism.com/?p=14047