Looking into the Black Box: Holding Intelligent Agents Accountable
Srivats Shankar*
Volume 10 Issue 3 (2017)
Since the 1950s, mathematicians and scientists have theorised about artificial intelligence and sought to understand the relationship it would have with humans. Although originally conceived as the creation of human-like machines, modern artificial intelligence tends to be applied to situations involving complex information and the intelligent application of reasoning. Taking many different forms, artificial intelligence systems now attract active investment from the information technology industry at an unprecedented scale, and they have already begun to appear in common digital technologies available today. The complexity of these systems offers both benefits and dangers to the community at large. Of particular concern is the obfuscated manner in which these systems operate, creating a ‘black box’ around the internal functioning of the system, which, in extreme circumstances, could lead to a denial of legal and human rights. Currently, most artificial intelligence systems can be characterised as intelligent agents: they take into consideration past knowledge, goals, values, and environmental observations to evaluate a situation and act appropriately. Conceiving of artificial intelligence systems as intelligent agents allows for a focused understanding of this novel legal problem, on the basis of which questions of accountability can be better framed. In this paper, I focus on why it is important to hold artificial intelligence accountable and on the most significant obstacles that prevent this goal from being achieved.