
An overview of human rights, artificial intelligence and autonomous weapons systems

Over the past decade, radical progress in artificial intelligence has made possible the development and deployment of fully autonomous[1] weapons systems. Once activated, these systems can select, attack, injure or even kill human targets, being able inter alia to operate without effective human control. Such weapons systems are often referred to as Lethal Autonomous Robotics (LARs), Lethal Autonomous Weapons Systems (LAWS) or, more comprehensively, Autonomous Weapons Systems (AWS). Their rapid development could, on the one hand, change the entire nature of warfare and, on the other, dramatically alter the conduct of law enforcement operations, raising extremely serious human rights concerns[2].

According to the US 2012 Directive on Autonomy in Weapons Systems, AWS can be defined as “any system that is capable of targeting and initiating the use of potentially lethal force without direct human supervision and direct human involvement in lethal decision-making”[3]. There is no denying that their deployment and use would be potentially harmful to a variety of human rights, threatening not only the right to life but also the prohibition of torture and other cruel, inhuman or degrading treatment or punishment and the right to security and dignity of the individual.

Turning to the Universal Declaration of Human Rights (UDHR)[4], Article 1 provides that all human beings are born free and equal in dignity and rights, while Article 3 establishes the right of everyone “to life, liberty and security of person”. That no one may be arbitrarily deprived of his or her life is a fundamental rule of international human rights law, further enshrined in Article 6(1)[5] of the International Covenant on Civil and Political Rights (ICCPR)[6]. Moreover, the right not to be subjected to torture or other inhuman treatment is likewise set out in the same declaration[7].

Given this legal framework, if on the one hand delegating lethal decisions entirely to machines, with humans relegated to mere targets, constitutes a de facto violation of human dignity, on the other a question of a purely ontological nature arises. In short, is it possible to establish that a machine is capable of violating human rights, despite being unable to hold within itself an awareness of what is right and what is wrong, or to have any measure of those properly human sensations such as physical pain or suffering? Hence the doubt as to whether the conduct carried out by AWS can effectively be qualified as formally contrary to the existing rules, notwithstanding the fact that, substantively, they do represent a threat to the abovementioned range of rights.

Technological evolution inevitably also affects the methods of armed conflict and the weapons used therein. Along these lines, the application of “intelligent” systems to war instrumentation was already advancing powerfully towards the end of the last century (consider, for example, so-called guided bombs, whose first prototype, the BOLT-117[8], dates back to the late Sixties). The advent of unmanned, human-replacing systems means that the person launching the weapon no longer needs to be physically present at the place and time at which it is released. The first generation of these unmanned systems consists of remote-controlled weapons systems, such as armed drones.

Here lies the clear connection between AWS and previous issues inherent in technology and human rights. The emergence of autonomous weapons is making it possible for humans not only to be physically absent from the point of release of force, as with armed drones, but to be “psychologically” absent as well, to the extent that they do not take the on-the-spot decision to direct and open fire. The actual release of force takes place elsewhere and some time after the system was activated. There is thus a process of increasing depersonalisation in the use of force through unmanned systems – already present in the case of remote-controlled weapons, but taken to the next level with autonomous weapons[9].

Furthermore, the total absence of human intervention that AWS entail would appear to conflict with the fundamental principles set out by human-rights-related legislation[10], such as the principle of “necessity”, under which the use of force must be strictly limited to those situations where it is utterly necessary, and “proportionality”, according to which the interest harmed must not exceed the interest protected. Indeed, the criteria of proportionality and necessity should be subject to an evaluation carried out by human beings, not machines.

In view of the above, there are several ways – of descending rigidity – to deal with AWS-related issues. The first and hardest would be an update of International Humanitarian Law (IHL) to include a specific prohibition of AWS (at least lethal ones), together with the establishment of the principle that taking a human life requires an informed and considered human judgement. In this very sense, the European Union took its first steps towards international negotiations on a legally binding instrument prohibiting lethal autonomous weapons systems in September 2018[11].

If, instead, in the debate between deploying autonomous weapons and banning them, the former option prevailed, proper measures should be taken to ensure compliance with the aforementioned range of rights and the consequent protection of the right holders – individuals. This would be viable only with a joint effort involving, on one hand, a strict review of AWS programming software[12] (e.g. avoiding machine bias arising from erroneous assumptions in machine learning processes) and, on the other, an update of the existing law. Regarding the software’s technical features, experts say that algorithmic biases cannot be entirely mitigated; arguably, the only way to eliminate all of them would be to use a random number generator with no preferences and no constraints. Still, it is truly important that, when confronting bias, there is a clear awareness of which biases must absolutely be mitigated in an autonomous weapons system[13].
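
As a purely illustrative sketch of what such a software review might involve, the snippet below checks whether a hypothetical target-classification model flags candidates at different rates across groups defined by a protected attribute – a simple disparate-impact test. Every name here (the model.predict call, the protected_attribute field, the 0.9 threshold) is an assumption for illustration, not a feature of any real weapons system.

```python
# Illustrative only: a minimal audit for one kind of algorithmic bias,
# assuming a hypothetical binary classifier exposing model.predict().

from collections import defaultdict

def selection_rates(model, samples):
    """Rate at which the model flags candidates as targets,
    broken down by a protected attribute (e.g. skin colour)."""
    flagged, totals = defaultdict(int), defaultdict(int)
    for sample in samples:
        group = sample["protected_attribute"]  # hypothetical field name
        totals[group] += 1
        if model.predict(sample["features"]) == 1:  # 1 = flagged as target
            flagged[group] += 1
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest per-group selection rate;
    values far below 1.0 suggest the model singles out one group."""
    return min(rates.values()) / max(rates.values())

# A reviewer might refuse certification unless, say,
# disparate_impact(selection_rates(model, test_set)) > 0.9 --
# the exact threshold is a policy choice, not a technical given.
```

Such a test cannot prove the absence of bias – it only measures one observable symptom – which is precisely why awareness of which biases to mitigate matters more than the illusion of eliminating them all.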

Of course, responsibility for mitigating unwanted algorithmic biases does not rest with a single actor. In fact, there are at least three categories of actors that should be held liable: program developers, acquirers and regulators (including international policymakers). This reasoning opens the way to the second point, namely the updating of legislation, but also to a wider consideration: who will assume legal and moral responsibility if autonomous machines use force in a way that would normally be considered a war crime and a breach of international law, or when, due to a malfunction or unforeseen circumstances, the wrong target is struck or there are excessive civilian casualties? There is clearly a gap in accountability, which is traditionally a strictly human affair and on which regulators really need to focus their attention[14].
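
One minimal technical precondition for closing that gap – offered here as a hedged sketch, with entirely hypothetical field names rather than a description of any existing system – is an immutable audit record of every engagement decision, so that developers, acquirers and regulators can later reconstruct who (or what) decided what, and on which inputs.

```python
# Sketch of an engagement audit record; every field is an assumption
# about what regulators might require, not an existing standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)  # frozen: the record cannot be altered after creation
class EngagementRecord:
    system_id: str              # which weapons system acted
    software_version: str       # ties the decision to an auditable build
    operator_id: Optional[str]  # None would mean no human in the loop
    sensor_inputs_hash: str     # fingerprint of the data the decision used
    model_decision: str         # what the software recommended
    human_override: bool        # whether a human confirmed or countermanded
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

Who must produce, store and disclose such records is exactly the kind of question the updated legislation would need to answer.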

Lastly, “if the international legal framework has to be reinforced against the pressures of the future, this must be done while it is still possible”[15]. Therefore, regulators should take into account the need for a certain degree of human intervention and push for semi-autonomous systems. In this regard, the notion of “meaningful human control”[16][17] should be developed as a guiding principle not only for the use of AWS but for the use of artificial intelligence in general, focusing not merely on isolated uses of such technologies but also on the role of technology as such in our future. Allowing technology not only to supplement but to replace human decision-making would undermine the very reason why life is valuable in the first place.
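
In the simplest possible terms, “meaningful human control” can be pictured as a gate that the software cannot pass on its own. The sketch below is one possible reading of that principle, assuming hypothetical identify_targets and request_human_confirmation functions; it illustrates the logic, not how such control should actually be engineered.

```python
# A minimal human-in-the-loop gate: the system may identify and track,
# but force is released only after an informed, considered human decision.
# identify_targets and request_human_confirmation are hypothetical.

def engage(sensor_data, identify_targets, request_human_confirmation):
    for target in identify_targets(sensor_data):
        # The machine proposes; only a human may dispose.
        decision = request_human_confirmation(
            target,
            context=sensor_data,   # the human sees what the machine saw
            timeout=None,          # no default-to-fire if the human is silent
        )
        if decision == "confirmed":
            yield target           # release of force authorised by a human
        # Any other outcome (denial, timeout, ambiguity) means no engagement.
```

The design choice worth noting is the default: absent an explicit human “confirmed”, the system does nothing – the opposite of full autonomy.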

To complete the picture, it is important to clarify that the correct functioning of (non-lethal) AWS could also bring with it a series of positive implications, first of all a more precise identification of military objectives and consequently better protection for civilians. This utopian premise is useful for understanding that the interests at stake are many and difficult to balance against one another.

A valid compromise could therefore see the use of semi-autonomous weapons – over which, therefore, meaningful human control is foreseen – whose software is developed with the precautions referred to above. Furthermore – it is worth stressing – the balance must rest on a clear and transparent regime of accountability among all the players involved, so as to at least partially mitigate that growing sense of depersonalisation in the use of force that would otherwise represent a degeneration of the whole system[18]. Returning to the formal doubt raised above (namely, whether the conduct carried out by AWS can be qualified as formally contrary to the existing rules), it is appropriate that the updating of the legislation also explicitly lists the conducts which, although committed by autonomous weapons, constitute a violation of the human rights mentioned. However, should even one of these safeguards be missing, it would be better to postpone the adoption of AWS until mankind is mature enough.

[1] The term ‘autonomous’ is used by engineers to designate systems that operate without direct human control or supervision. Engineers also use the term ‘automated’ to distinguish unsupervised systems or processes that involve repetitive, structured, routine operations without much feedback information (such as a dishwasher), from ‘robotic’ or ‘autonomous’ systems that operate in dynamic, unstructured, open environments based on feedback information from a variety of sensors (such as a self-driving car). Regardless of these distinctions, all such systems follow algorithmic instructions that are almost entirely fixed and deterministic, apart from their dependencies on unpredictable sensor data, and narrowly circumscribed probabilistic calculations that are sometimes used for learning and error correction.

[2] Autonomous Weapons Systems: Five Key Human Rights Issues for Consideration, Amnesty International Publications (2015), p. 5, https://www.amnestyusa.org/reports/autonomous-weapons-systems-five-key-human-rights-issues-for-consideration/

[3] P. Asaro, On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making, International Review of the Red Cross (2012), Volume 94, Number 886, p. 690

[4] Universal Declaration of Human Rights (UDHR), adopted on 10 December 1948

[5] “Every human being has the inherent right to life. This right shall be protected by law. No one shall be arbitrarily deprived of his life.”

[6] International Covenant on Civil and Political Rights, adopted by the General Assembly of the United Nations on 16 December 1966

[7] Article 5 of UDHR

[8] BOLT-117 (BOmb, Laser Terminal-117), retrospectively re-designated as the GBU-1/B (Guided Bomb Unit), was the world’s first laser-guided bomb (LGB)

[9] C. Heyns, Autonomous weapons in armed conflict and the right to a dignified life: an African perspective, Institute for International and Comparative Law in Africa, University of Pretoria Press Office (2016), p. 2

[10] N. Tsagourias, Fundamental Principles of International Humanitarian Law, Cambridge University Press (2018), p. 39

[11] European Parliament resolution of 12 September 2018 on autonomous weapon systems, Official Journal of the European Union

[12] J. M. Kessel, Killer Robots Aren’t Regulated. Yet., New York Times (Dec. 2019). “Killing in the Age of Algorithms” is a New York Times documentary examining the future of artificial intelligence and warfare.

[13] For instance, ensuring that AWS are not more likely to hit targets with specific features, which would be discriminatory against one or more categories of individuals (e.g. on the basis of skin colour)

[14] R. Crootof, War Torts: Accountability for Autonomous Weapons, University of Pennsylvania Law Review (2016), p. 1347

[15] Quotation from Christof Heyns

[16] UK-based NGO Article 36, drawing on the IHL principle of humanity, first proposed that the standard for the acceptability of autonomous weapons be the exercise of “meaningful human control” over each individual attack. Autonomous weapons that do not meet that standard – in other words, fully autonomous robots – should be banned.

[17] Killer Robots and the Concept of Meaningful Human Control, Memorandum to Convention on Conventional Weapons (CCW) Delegates

[18] A. Sharkey, Autonomous weapons systems, killer robots and human dignity, Ethics and Information Technology, Springer Link (2018)

Giacomo Bertelli

Born in Genoa in 1992, he graduated in Law in 2017 and has been qualified to practise law since 2021. A lifelong lover of cinema, sport, music and political-economic current affairs, during his university years he developed a strong interest in technology and legal informatics. In 2020 he obtained an LL.M. in Law of Internet Technology at Università Luigi Bocconi, focused mainly on intellectual property, personal data protection and competition law in the digital world. After a year at the European Patent Office, where he worked mainly on patents, designs, artificial intelligence and industrial property strategies, he currently works in the legal department of Google Italy.
