AI Ethics - Superintelligent agents
The creation of an intelligent machine surpassing human intelligence across a wide range of skills has been proposed as a possible existential catastrophe (i.e., an event comparable in severity to human extinction). Among those concerned about existential risk related to Artificial Intelligence (AI), it is common to assume that AI will not only be very intelligent, but will also be a general agent (i.e., an agent capable of acting in many different contexts). This article explores the characteristics of machine agency and what it would mean for a machine to become a general agent.
In particular, it does so by articulating some important differences between belief and desire in the context of machine agency. One such difference is that while an agent can acquire new beliefs by itself through learning, desires must either be derived from preexisting desires or acquired through an external influence, such as a human programmer or natural selection. We argue that to become a general agent, a machine needs general desires, but general desires cannot be derived sui generis from non-general desires. Thus, even though general agency in AI could in principle be created, it cannot arise spontaneously through an endogenous process. In conclusion, we argue that a common AI scenario, in which general agency suddenly emerges in a non-general agent AI, is not plausible.
About Karim Jebari
Karim Jebari is a researcher at the Institute for Futures Studies. He defended his doctoral thesis at the Royal Institute of Technology (KTH) in December 2014. The thesis concerns applied ethics, and in particular how we should relate to the risks and opportunities of technological innovation. Photo: Sara Moritz
Place: Lecture Hall Palmstedt, university building, Chalmersplatsen 2, Campus Johanneberg
Welcome! (no registration required)