As artificial intelligence becomes increasingly present in the everyday lives of a growing number of individuals, bringing numerous benefits but also posing serious risks, awareness of the need for human control is rising. With the Artificial Intelligence Act, which entered into force in August 2024, the European Union (EU) established the first directly binding legal framework in the field of artificial intelligence, requiring Member States to introduce regulatory sandboxes as controlled environments for testing innovative artificial intelligence solutions. The effectiveness of these sandboxes across Member States will depend on the European Commission's implementing acts, as well as on the detailed design of each sandbox and the specific testing methods applied. This article examines the significance and role of such sandboxes and questions the advisability of fragmenting them at the Member State level. It argues that it would likely be more effective to establish a common EU-level sandbox, or at least several shared sandboxes dedicated to specific sectors. For an effective global solution, it would be sensible to create a regulatory sandbox for artificial intelligence at the international level, for example within the United Nations, since the development and application of artificial intelligence transcend national borders. As this appears difficult to achieve in the current state of international relations, consideration should instead be given to establishing a global online platform for artificial intelligence, aimed at developing guidelines and recommendations, testing artificial intelligence systems, and providing education on their use.