This thesis examines the legal approach to the regulation of prohibited and high-risk AI systems introduced by the new AI Act (Regulation (EU) 2024/1689). Its main objective is to determine to what extent the new Act reflects the real risks posed by these systems and whether it regulates them accordingly. To this end, a working definition of AI systems is first given, after which a selection of the more harmful risks – such as manipulation and the commission of crimes – is illustrated with real-life use cases. The evolution of the AI Act’s current legislative approach is then briefly summarised, followed by a more detailed examination of Articles 5 and 6 of the Regulation, which constitutes the core of the thesis. Accordingly, the regulation of prohibited practices is described first, with particular attention paid to crime-prediction systems and remote biometric identification systems. The regulation of high-risk AI systems is then presented, scrutinising systems for the prevention, detection and investigation of crime and systems for use in criminal justice. Both chapters also highlight shortcomings of the current legal regime and, in some cases, suggest a different approach. Finally, two further topics are discussed separately: the regulation of AI systems for military, defence and national security purposes, and a possible approach to regulating the widespread practice of so-called “ghost work”, where Union fundamental values may be infringed even before a system is placed on the market or put into use.