Autonomous vehicles and liability

(from Alphonsian Academy blog)

Recently, some employees of the electric car company Tesla have accused its CEO, Elon Musk[1], of subordinating the safety of its cars to other commercial criteria and of not warning sufficiently about the limitations and risks of its autonomous driving system. The complainants claim that this system has been marketed ambiguously, as if it were capable of dispensing entirely with the human driver (level 5).[2]

Towards fully autonomous driving

In its advertising, Tesla explicitly acknowledges the limitations of its autonomous driving system, probably for legal reasons, and warns that the driver must always be prepared to take control of the vehicle. At the level of perception, however, the company tries to make users believe in the safety of the system[3]. In various ways, it tries to convince them that these vehicles are capable of operating autonomously and reliably, even in complex and unpredictable situations. This seems to be indicated, for example, by the fact that Tesla has recently introduced games so that the driver can entertain himself while the vehicle takes him to his destination[4]. In fact, videos circulating on the Internet show drivers of these vehicles apparently asleep while travelling on Californian highways.[5]

Tesla is not the first to build cars with autonomous driving. As early as 1930, General Motors manufactured a car that required roads fitted with fixed transmitters: the vehicle was guided by capturing, via radio, the electromagnetic fields generated by those transmitters. In 1977, Japan's Tsukuba Mechanical Engineering Laboratory introduced a vehicle that did not need specially adapted roads, as it guided itself using two cameras. Currently, worldwide demand for these vehicles is growing by more than 16% per year, although opposition to them is also growing. Many automotive companies plan to put autonomous vehicles on sale in the coming decades.[6]

The difficulty of programming

These vehicles need to be programmed to make decisions in complex circumstances that demand value judgments.[7] This programming is difficult, since there are many possible situations of moral uncertainty. Who should decide the type of ethics (deontological, utilitarian, virtue ethics, etc.) that will guide the vehicle's decisions and its possible learning?[8] These decisions cannot be left to the programmer or the manufacturer alone, as many other parties are involved, actively and passively: for example, drivers, users, passengers and pedestrians (Robot-2, 35; 46).

The trolley dilemma[9] and the helmet dilemma[10] are good examples. If cases of this type always require careful discernment when a human person makes the decision, it is even more difficult to develop an algorithm that can guide these autonomous machines in decision-making. What for a driver may be a circumstantial decision, subject to the improvisation of the moment, changes in nature when it is inserted into an algorithm long in advance, since programmed decisions are premeditated and therefore carry a greater burden of responsibility.

When manufacturing and programming a vehicle, provision must be made for the subsequent involvement of the person who will drive it, so that he can exercise his moral autonomy. Specifically, the driver should be able to choose the type of ethics on which the vehicle will base its decisions. Likewise, the other parties involved should be able to participate (Robot-2, 23). For example, it has been suggested that, to achieve this objective, it would be advisable for the buyer to complete a prior questionnaire and, based on the answers, for the vehicle to be programmed accordingly.
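Purely as a hypothetical illustration of this questionnaire-based proposal (the question names, frameworks and threshold values below are invented for this sketch and do not come from Tesla or from Robot-2), the idea could be sketched as a simple mapping from a buyer's answers to an ethics setting:

```python
from dataclasses import dataclass

@dataclass
class EthicsSetting:
    framework: str             # ethical theory guiding dilemma decisions
    handover_threshold: float  # uncertainty above which control returns to the driver

def setting_from_questionnaire(answers: dict) -> EthicsSetting:
    """Map a buyer's questionnaire answers to an ethics setting (toy logic)."""
    if answers.get("minimize_total_harm"):
        framework = "utilitarian"
    elif answers.get("never_sacrifice_bystanders"):
        framework = "deontological"
    else:
        framework = "virtue"
    # A buyer who prefers human control gets an earlier handover to the driver.
    threshold = 0.3 if answers.get("prefers_human_control") else 0.6
    return EthicsSetting(framework, threshold)

setting = setting_from_questionnaire(
    {"minimize_total_harm": True, "prefers_human_control": True}
)
print(setting.framework, setting.handover_threshold)  # utilitarian 0.3
```

Even in a toy version like this, the ethical weight is visible: the chosen framework is fixed in advance for every future dilemma the vehicle will face, which is precisely why the text speaks of a greater burden of responsibility for programmed decisions.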

In any case, the complexity of the situations the vehicle will have to face gives rise to numerous uncertainties. In the current state of the technology, which still leaves much room for improvement, it is considered essential that the programming of these vehicles determine how and when the algorithm must transfer final decision-making to the human driver in complex cases.
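The handover requirement just described can also be sketched hypothetically (the rule, names and values below are illustrative assumptions, not drawn from any real vehicle's software) as a decision about who holds final authority in a given situation:

```python
def final_authority(uncertainty: float, threshold: float, driver_ready: bool) -> str:
    """Toy rule for when the algorithm transfers final decision-making."""
    if uncertainty <= threshold:
        return "vehicle"            # the algorithm is confident enough to decide
    if driver_ready:
        return "driver"             # final decision is transferred to the human
    return "minimal_risk_stop"      # no safe decider: bring the vehicle to a halt

print(final_authority(0.2, 0.5, driver_ready=False))  # vehicle
print(final_authority(0.8, 0.5, driver_ready=True))   # driver
```

The third branch matters morally: if the driver is asleep or distracted, as in the videos mentioned above, there may be no one to hand control back to, and the system must have a predetermined fallback.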

Accidents and liability

These vehicles are presented as safer but, so far, the number of accidents in which they have been involved is not lower than that of conventional vehicles, although the consequences have been less serious.[11]

It is difficult to delimit responsibility in cases of accidents, deaths or damage[12]. Typically, the vehicle's owner bears primary responsibility for the harm caused, just as when a dog bites a passer-by it is the owner who must answer for it (Robot-2, 68). However, the issue is complex: it must be determined whether the accident was due to the algorithms the vehicle incorporates, to manufacturing defects or to failures in the alert system. Currently, reports of accidents caused by failures in the autonomous driving system are increasing.[13]

These are just a few of the many ethical issues that self-driving cars raise today. In this and other areas, a broad and serene debate is needed on the many technologies related to robotics and artificial intelligence that are increasingly present in our society.

Fr. Martín Carbajo Nuñez, OFM


[1] Cf. https://www.tesla.com/elon-musk. CEO (Chief Executive Officer).

[2] Level 5 refers to full autonomy, which would make driver intervention unnecessary (Full Self Driving, F.S.D.). Cf. https://www.km77.com/reportajes/varios/conduccion-autonoma-niveles.

[3] Elon Musk had declared in 2016: «The basic news is that all Tesla vehicles leaving the factory have all the hardware necessary for Level 5 autonomy». Cited in Metz Cade – Boudette Neal E., «Inside Tesla as Elon Musk Pushed an Unflinching Vision for Self-Driving Cars», in The New York Times [NYT] (6.12.2021); Internet: https://www.nytimes.com/2021/12/06/technology/tesla-autopilot-elon-musk.html.

[4] Boudette Neal E., «A new Tesla safety concern: Drivers can play video games in moving cars», in NYT (12.12.2021); Internet: https://www.nytimes.com/2021/12/07/business/tesla-video-game-driving.html

[5] Baker Peter C., «I think this guy is, like, passed out in his Tesla», in NYT (27.11.2019); Internet: https://www.nytimes.com/2019/11/27/magazine/tesla-autopilot-sleeping.html?searchResultPosition=7

[6] In 2019, the company Uber was already offering services with self-driving cars in the US city of Pittsburgh. Cf. Benanti Paolo, «Artificial intelligences, robots, bio-engineering and cyborgs: new theological challenges?», in Concilium 55/3 (2019) 46-61, here 48.

[7] «Algorithms that automate complex ethical decision making». Millar Jason, «Ethics settings for autonomous vehicles», in Lin Patrick – Jenkins Ryan – Abney Keith (ed.), Robot ethics 2.0. From autonomous cars to artificial intelligence, [Robot-2], Oxford Univ. Press, Oxford 2017, 22.

[8] Loh Janina, «Old responsibilities or new responsibilities? The pros and cons of a transformation of responsibility», in Concilium 55/3 (2019) 111-121, here 118.

[9] This dilemma was developed by philosopher Philippa Foot in 1967 and adapted by Judith Jarvis Thomson in 1985. Cf. https://theconversation.com/the-trolley-dilemma-would-you-kill-one-person-to-save-five-57111

[10] Cf. https://www.youtube.com/watch?v=ixIoDYVfKA0&t=8s

[11] Cf. https://gerberinjurylaw.com/autonomous-vehicle-statistics/

[12] Millar Jason, «Ethics settings for autonomous vehicles», in Robot-2, 20-34, here 22.

[13] Cf. Robot-2, 66; Boudette Neal E., «Tesla says autopilot makes its cars safer. Crash victims say it kills», in NYT (5.07.2021); Internet: https://www.nytimes.com/2021/07/05/business/tesla-autopilot-lawsuits-safety.html?searchResultPosition=2