Artificial Driving Intelligence and Moral Agency: Examining the Decision Ontology of Unavoidable Road Traffic Accidents through the Prism of the Trolley Dilemma
Cunneen, Martin; Mullins, Martin; Murphy, Finbarr; Gaines, Sean
APPLIED ARTIFICIAL INTELLIGENCE
2019
Vol. 33, pp. 267–293
Abstract
The question of the capacity of artificial intelligence to make moral decisions has been a key focus of investigation in robotics for decades. This question has now become pertinent to automated vehicle technologies, where it concerns the capacity of artificial driving intelligence to respond to unavoidable road traffic accidents. Artificial driving intelligence will make a calculated decision that could equate to deciding who lives and who dies. In calculating such important decisions, does the driving intelligence require moral intelligence and a capacity to make informed moral decisions? Artificial driving intelligence will be determined by, at the very least, state laws, driving codes, and codes of conduct relating to driving behaviour and safety. Does it also need to be informed by ethical theories, human values, and human rights frameworks? If so, how can this be achieved, and how can we ensure there are no moral biases in the moral decision-making algorithms? The question of moral capacity is complex and has become the ethical focal point of this technology. Research has centred on applying Philippa Foot's famous trolley dilemma. We claim that before such applications turn to moral theories, the trolley dilemma should first be utilised as an ontological experiment. The trolley dilemma is succinct in identifying important ontological differences between human driving intelligence and artificial driving intelligence. In this paper, we argue that when the trolley dilemma is focused on ontology, it has the potential to become an important elucidatory tool. It can act as a prism through which one can perceive different ontological aspects of driving intelligence and assess response decisions to unavoidable road traffic accidents. Identifying these ontological differences is integral to understanding the underlying variances that support human and artificial driving decisions. Ontologically differentiating between these two contexts allows for a more complete interrogation of the moral decision-making capacity of artificial driving intelligence.
MENTIONS DATA (among papers in Computer Science): 0 Twitter, 0 Wikipedia, 0 News, 75 Policy