Lethal Autonomous Weapons (LAWs) are considered the third revolution in warfare, after gunpowder and nuclear arms. In recent years, international organisations, the media and experts, including Elon Musk (Tesla), Mustafa Suleyman (Alphabet) and Stephen Hawking, have raised awareness of the dangers of LAWs and called for a ban. These weapons are being developed by armies around the globe, notably in China, South Korea, Russia and the US, among others.
The terms used to describe these weapons convey an erroneous conception of LAWs. First of all, the term “autonomous weapons” has no internationally agreed definition. A form of autonomy already exists in military devices (e.g. the autopilot in aircraft), but it can be overridden by human action. Taken a step further, however, “autonomy” could mean that an AI is in charge of selecting targets and opening fire without a human command or a kill switch, that is, without any possibility for humans to override the AI’s actions.
In 2012, the NGO Human Rights Watch issued a report aimed at clarifying this notion. The report distinguished three levels of autonomy:
- Human-in-the-loop: the LAW can only select targets on its own; opening fire occurs solely under human command,
- Human-on-the-loop: the LAW can select targets and open fire on its own under human oversight; such LAWs are equipped with a kill switch allowing humans to override their actions,
- Human-out-of-the-loop: the LAW can select targets and open fire without any external human intervention (developed using unsupervised AI).
Another misconception about LAWs lies in the fact that they are commonly referred to as “killer robots”. To be considered a killer under criminal law, one needs to be aware of one’s own actions and to have the intent to commit them.
However, such moral agency is still reserved to sentient beings. Since LAWs are not sentient, they cannot distinguish between good and bad, fair and unfair, moral and immoral. Thus, LAWs cannot be considered killers per se.
By the same token, and since LAWs do not possess consciousness, they cannot be punished under criminal law for any offence: to be fit for trial, one needs to have been aware of one’s own actions and to have been willing to commit them.
This crucial lack of moral conscience in LAWs raises the question of liability: jus in bello, or International Humanitarian Law, requires that in wartime someone be held liable when civilians are killed. But who can be held liable for civilian deaths when the wrong target is struck or the system crashes?
Additionally, it would be hard to tell whether a human-out-of-the-loop LAW took a poor decision because of a malfunction due to a lack of maintenance, a flaw in the algorithm, or a mistake by the developer who coded it. Because human-out-of-the-loop LAWs make their decisions on their own, it is difficult to identify the person or body responsible. For the aforementioned reasons, LAWs should not be considered lawful weapons.
On the other hand, armies’ stance is that LAWs will provide them with strategic and tactical advantages on the battlefield, such as reaching areas that no soldier could and reacting faster and more accurately than soldiers, who may experience twinges of conscience or emotions. Research in neuroscience has shown that under great pressure or stress, the brain function responsible for self-control can shut down, which may lead to sexual assault and other deviant behaviours.
The previous argument leads to another contention: LAWs “are preferable on moral grounds to the use of human combatants”. They are preferable in part because they do not “shoot first [and] ask questions later”. Since LAWs have no notion of self-preservation, they will not fire at civilians who are not recognised as targets, thus (theoretically) reducing casualties.
LAWs also carry an economic incentive: they require neither salaries, nor medical care, nor retirement pensions. Considering that States have been hiring soldiers and law-enforcement staff following the recent terrorist attacks in Europe, LAWs sound, prima facie, like a way to avoid the otherwise necessary increase in defence budgets.
In an attempt to regulate LAWs, Human Rights Watch has been seeking, since 2014, an international pre-emptive agreement banning or drastically limiting LAWs in armed conflicts. Since 2014, six informal meetings of experts have been convened. Last summer, during the sixth meeting of experts, it was agreed that discussions regarding a potential ban on LAWs would start in 2019.
Surprisingly, it is mostly the private sector that started raising awareness of this issue, even though LAWs would create a lucrative marketplace. These individuals foresee the dangerousness of LAWs and their possible counter-productive effect on AI technology: they could taint public opinion and thereby prevent AI’s expansion into other areas.
However, no international agreement will be reached if countries do not recognise the dangerousness of the technology. Last September, at a UN meeting, Russia, the USA, Australia, Israel and South Korea blocked the negotiations on a consensus for a ban, which were thus dead on arrival. These countries allegedly wanted to “explore potential advantages or benefits to developing and using LAWs systems”.
Master 2 Cyberjustice – promotion 2018-2019