In this PhD thesis we explore the problem of transparency, which in the context of artificial intelligence (AI) refers to a set of challenges arising from the fact that complex machine learning systems do not operate transparently: humans cannot understand the reasons behind the decisions such systems make. The thesis is divided into three chapters. In the first chapter, we provide the broader context in which the problem of transparency arises in AI, and we examine why it arises and whether we can reasonably expect all AI systems ever to operate transparently. In the second chapter, we develop a genealogy of the concept of 'transparency' by analysing its moral connotations and evaluating the ideal of transparency that characterises contemporary society. In this chapter, we also present in more detail, and critically evaluate, three important documents that provide ethical guidelines for the use of AI and propose legal regulation in the field. In the third chapter, we draw five key claims from the results of the first two chapters and provide a criterion for assessing whether unexplainable AI systems can be used ethically.