This master’s thesis examines the legal challenges posed by the use of artificial intelligence in (self-)triage processes in healthcare. Technological advances offer opportunities to ease the burden on healthcare systems, but at the same time raise concerns about the unreliability, opacity, and bias of AI systems. Legal scholarship is thus faced with the question of how existing liability regimes address the risks created by such technologies.
The research analyses the European regulatory framework, particularly the Artificial Intelligence Act and the revised Product Liability Directive, as well as the Slovenian rules on medical liability. It finds that the European approach focuses primarily on risk management and minimum safety standards rather than on harmonising substantive civil-law rules. As a result, liability for errors made by AI systems in medicine is still largely assessed under fault-based principles, which in practice often make fault difficult to prove. The scope for strict (objective) liability remains limited and is not adapted to the specific features of self-learning systems.