This undergraduate thesis addresses the question of responsibility for self-driving vehicles. First, I introduce the SAE levels of driving automation, which classify how autonomous a vehicle is. I then describe the dilemmas posed by vehicles that are not fully autonomous, since the driver must still take over control of such vehicles in specific situations. The emphasis of the thesis, however, is on the ethical issues raised by fully autonomous, or self-driving, vehicles. These vehicles rely on built-in self-learning artificial-intelligence algorithms whose possible outcomes can no longer all be predicted, as they could be with deterministic algorithms. Because artificial intelligence cannot yet be held accountable for itself, the question arises of who is responsible for traffic accidents caused by autonomous vehicles. Artificial intelligence must also be told how to act in ethically sensitive situations. Such situations are illustrated by the trolley problem, which I describe before examining which of its proposed solutions would be most appropriate for use in self-driving vehicles. I do not arrive at a definitive answer, because every solution has problems of its own. I therefore explore the question of who should make the sensitive decision about how self-driving vehicles are to act in cases of unavoidable accidents, and who should bear responsibility for it. Finally, I conclude that autonomous vehicles, even if no solution to the trolley problem is found, reduce the overall number of traffic accidents and bring so many other benefits that it makes sense to get them on the road as soon as possible.