This master's thesis examines the ethical challenges that arise throughout the lifecycle of large language models (LLMs), from development to integration into society. The central analytical framework is the normative AI4People model, which outlines five ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. Through qualitative content analysis, the study reviews publicly available documents from three LLM providers: OpenAI, Google DeepMind, and Mistral AI. The findings show that while providers reference these ethical principles in their documents, their approaches to implementing them vary and are often only partially developed. The study highlights the gap between stated ethical values and their practical realization, and offers a normative foundation for critically evaluating providers' current approaches to ethical responsibility. The thesis contributes to the field of AI ethics by demonstrating how structured ethical frameworks can be applied to concrete cases of LLM development and governance, and by offering a basis for further discussion of normative criteria and provider responsibility in technologically complex environments.