If the goal was to make a splash, they succeeded.
But if this is supposed to be a serious proposal, it’s positively idiotic.
This month Giuseppe Contissa, Francesca Lagioia, and Giovanni Sartor of the University of Bologna published a description of “the ethical knob”, which adjusts the behavior of an automated vehicle.
Specifically, the “knob” is supposed to set a one-dimensional preference for whether to maximally protect the user (i.e., the person setting it) or others. In the event of a catastrophic situation where life is almost certain to be lost, which lives should the robot car sacrifice?
Their paper is published in Artificial Intelligence and Law, and they have a rather legalistic approach. In the case of a human driver, there are legal standards of liability that may apply to such a catastrophe. In general, in the law, choosing to harm someone incurs liability, while inadvertent harm is less culpable.
Extending the principles to AI cars raises the likelihood that whoever programs the vehicle bears responsibility for its behavior, and possibly liability for choices made by his or her software logic. Assuming that software can correctly implement a range of choices (which is a fact not in evidence), the question is what should designers do?
The Bologna team suggests that the solution is to push the burden of the decision onto the “user”, via a simple, one-dimensional preference for how the ethical dilemma should be solved. Someone (the driver? the owner? the boss?) can choose “altruist”, “impartial”, or “egoist” bias in the life and death decision.
This idea has been met with considerable criticism, with good reason. It’s pretty obvious that most people would select egoist, creating both moral and safety issues.
I will add to the chorus.
For one thing, the semantics of this “knob” are hazy. They envision a simple, one-dimensional preference that is applied to a complex array of situations and behavior. Aside from the extremely likely prospect of imperfect implementation, it isn’t even clear what the preference means or how a programmer should implement the feature.
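To make the point concrete, here is one of many equally defensible readings a programmer might settle on. Everything below, from the weights to the decision rule, is my own invention, not anything from the paper, which is precisely the problem: is “egoist” a weight of 10, 100, or infinity?

```python
# A toy reading of the "ethical knob": weight the user's expected deaths
# relative to everyone else's. The weights and the decision rule are
# invented for illustration; the paper never says how a setting should
# become behavior.
USER_WEIGHT = {"altruist": 0.0, "impartial": 1.0, "egoist": 10.0}

def pick_maneuver(knob, maneuvers):
    """Choose the maneuver with the lowest weighted expected loss of life.

    Each maneuver is (name, expected_user_deaths, expected_other_deaths).
    """
    w = USER_WEIGHT[knob]
    return min(maneuvers, key=lambda m: w * m[1] + m[2])[0]

# A contrived dilemma: swerving likely kills the user, braking likely
# kills a bystander.
options = [("swerve", 0.9, 0.0), ("brake", 0.0, 0.9)]
print(pick_maneuver("egoist", options))    # -> "brake" (the bystander dies)
print(pick_maneuver("altruist", options))  # -> "swerve" (the user dies)
```

A different programmer could just as defensibly use weights of 0.5 and 2, or a hard rule instead of weights, and the same knob position would produce different corpses.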
Even more important, it is impossible for the “user” to have any idea what the knob actually does, and therefore to understand what the choice actually means. It isn’t possible to make an informed decision, which renders the user’s choice morally empty and quite possibly legally moot.
If this feature is supposed to shield the user and programmer from liability, I doubt it will succeed. The implementation of the feature will surely be at issue. Pushing a pseudo-choice to the user will not insulate the implementer from liability for how the knob works, or for any flaws in the implementation. (“The car didn’t do what I told it to,” says the defendant.)
The intentions of the user will also be at issue. If he chooses ‘egoist’, did he mean to kill the bystanders? Did he know it might have that effect? Ignorance of the effects of a choice is not a strong defense.
I’m also not sure exactly who gets to set this knob. The authors use the term “user”, and clearly envision one person who is legally and morally responsible for operating the vehicle. This is analogous to the driver of a vehicle.
However, the “user” is actually more of a passenger, and may well be hiring a ride. So who says who gets to set the knob? The rider? The owner of the vehicle? Someone’s legal department? The terms and conditions from the manufacturer? The rules of the road (i.e., the T&C of the highway)? Some combination of all of the above?
I would imagine there would be all sorts of financial shenanigans arising from such a feature. Rental vehicles charging more for “egoist” settings, with the result that rich people are protected over poor people. Extra charges to enable the knob at all. Neighborhood safety rules that require the “altruist” setting (except for wealthy or powerful people). Insurance companies charging more for different settings (though I’m not sure how their actuaries will find the relative risks). And so on.
Finally, the entire notion that this choice can be expressed on a one-dimensional scale, set in advance, is questionable. Leaving aside what the settings mean, and how they should be implemented, the idea that they can be set once, in the abstract, is problematic.
For one thing, I would want this to be context sensitive. If I have children in the car, that is a different case than if I am alone. If I am operating in a protected area near my home, that is a different case than riding through a wide-open, “at your own risk” situation.
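A sketch of what I mean, with contexts and override rules that are entirely mine, not the paper’s:

```python
def contextual_setting(base, children_aboard, in_protected_zone):
    """Adjust a default knob setting by context (my hypothetical rules)."""
    if children_aboard:
        return "egoist"     # presumably I'd protect my own children first
    if in_protected_zone:
        return "altruist"   # defer to the neighborhood's safety rules
    return base             # otherwise, whatever I chose in the abstract

print(contextual_setting("impartial", children_aboard=True,
                         in_protected_zone=False))  # -> "egoist"
```

Even this trivial version raises the question of who gets to define the contexts and the overrides.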
Second, game theory makes me want to have a setting to implement a tit-for-tat strategy. If I am about to crash into someone set at ‘egoist’, then set me to ‘egoist’. If she is set to ‘altruist’, set me to ‘altruist’, too. And so on. (And, by the way, shouldn’t there be a visible indication on the outside so we know which vehicles are set to kill us and which ones aren’t?)
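Here is what that tit-for-tat might look like, assuming (as the paper does not) that vehicles broadcast their settings to each other:

```python
def tit_for_tat(my_default, oncoming_broadcast):
    """Mirror whatever the oncoming vehicle says it will do.

    This presumes a broadcast channel that exists nowhere in the
    proposal; hence my wish for a visible indicator.
    """
    if oncoming_broadcast in ("egoist", "altruist"):
        return oncoming_broadcast   # reciprocate, selfish or generous
    return my_default               # no signal: keep my own choice

print(tit_for_tat("impartial", "egoist"))  # -> "egoist"
```

Of course, if both cars run this logic against each other, the “setting” becomes a negotiation protocol, which is yet another thing the paper does not contemplate.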
This whole thing is such a conceptual mess. It can’t possibly work.
I really hope no one tries to implement it.
- Giuseppe Contissa, Francesca Lagioia, and Giovanni Sartor. The Ethical Knob: ethically-customisable automated vehicles and the law. Artificial Intelligence and Law, 25(3):365–378, September 2017. https://doi.org/10.1007/s10506-017-9211-z