In 2016, Mercedes announced that its self-driving cars would prioritize the safety of their occupants over that of pedestrians.
Artificial intelligence is the product of a human-made algorithm. It is built to answer a need and to carry out specific tasks. That same algorithm feeds on the data its maker provides and learns from it. In other words, artificial intelligence is the culmination of many choices made by a very human teacher. And like every human, it is imperfect.
Imperfect, but perfectible, because an algorithm can keep learning. Autonomous cars illustrate this well: they never stop getting smarter and safer. By facing new situations, autonomous cars are expected to become safer than cars driven by people. Even though the technology has been delayed, because a mistake made by an algorithm seems far less acceptable than a mistake made by a human, their presence will soon become unavoidable.
Even if most issues can be solved, others, though well known, have yet to find a satisfying answer. One of them lies in the ethics of autonomous cars. Even if they make the roads much safer, they will never be one hundred percent safe. There will be accidents and damage, and in situations where harm cannot be avoided, there will once again be a choice to make.
While a satisfying answer has yet to be found, this question has prompted a great deal of thinking in recent years. The first aspect concerns the harm itself: if the car has to choose who is to be hurt, or even killed, who should that be?
In a 2012 case in Aschaffenburg, Germany, a car's lane-keeping assistant kept the vehicle on the road after the driver suffered a stroke, and the car killed a mother and her child. The court reasoned that because the driver benefited from the system, he was liable even in the absence of fault or negligence. Following this reasoning, should the driver be the one put in danger, in the best interest of the person crossing the road, because he is the one using the system?
Another approach rests on the ethical thought experiment first described by Philippa Foot in 1967. If a runaway trolley were about to kill five people, and a person standing at the switch could divert it onto another track, killing not five individuals but one who was not in its original path, should the person at the switch change the trolley's course? This scenario pits deontological ethics against utilitarianism. Here the question is only about the number of casualties, but it can go further: what if one person is old and the other young, or if one is successful and considered "useful" to society while the other is not?
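To make the contrast concrete, here is a purely hypothetical sketch, not drawn from any real vehicle's software, of what a naive utilitarian rule would look like in code. The `Outcome` type and `choose` function are invented for illustration only.

```python
# Purely illustrative sketch of a naive utilitarian decision rule.
# This is NOT how any real autonomous vehicle decides; the names and
# structure here are hypothetical.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    casualties: int  # expected number of people harmed

def choose(outcomes: list[Outcome]) -> Outcome:
    # Naive utilitarianism: pick the outcome with the fewest casualties.
    return min(outcomes, key=lambda o: o.casualties)

trolley = [
    Outcome("stay on course", casualties=5),
    Outcome("divert to the side track", casualties=1),
]
print(choose(trolley).description)  # -> "divert to the side track"
```

The function would run just as happily if `casualties` were replaced by a weighted score factoring in age or social "usefulness". The code is trivial; agreeing on the weights is not, and that is precisely where the deadlock described below begins.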
It is tempting to favor utilitarianism over other ethical reasoning when programming artificial intelligence, but it fails to win consensus. "Moral Machine" is an MIT website that tries to gather as much data on the subject as it can. It presents different scenarios and invites visitors to give their opinion on who should live or die according to various criteria: whether they are driving the car, crossing the road against a red light, old or young, athletic or not. Looking at the results, it appears that even though some tendencies stand out, we once again fail to reach complete agreement.
If the choice of who should live or die is a dead end, then perhaps we should instead ask who should make that choice. Should it be the one developing the algorithm? The company selling the car? The driver? The government? Should it be put to a universal vote? Whatever the solution ends up being, it will always mean finding an imperfect answer to a situation that will always end in a casualty. The question remains: in a scenario where not making a choice is already a choice, who will make it?
Illustration: Pixabay
Madison Keener
M2 Cyberjustice promotion 2019-2020
Sources:
https://img-9gag-fun.9cache.com/photo/am5p39y_460s.jpg
http://moralmachine.mit.edu/hl/fr
https://futurism.com/images/laws-and-ethics-for-autonomous-cars