
AI draft Regulation: towards a new approach on biometric surveillance?

by Federica Paolucci and Giacomo Bertelli

Introduction

On April 21, 2021, the European Commission presented the Proposal for a Regulation laying down harmonised rules on Artificial Intelligence and amending certain Union legislative acts, in line with the current Digital Strategy. This new Regulation, which follows the proposals put forward in October 20201 and the White Paper2 published in February 2020, pursues the ambitious goal of making the EU a pole of attraction for the development of reliable and trustworthy AI.

As the President of the Commission, Ursula von der Leyen, herself remarked, although AI certainly represents a massive opportunity – also in relation to the Next Generation EU plan, which will contribute to strengthening the European Union’s excellence in AI with an investment of around EUR 150 billion – citizens need technologies they can trust and, precisely for this reason, the new rules establish high security standards proportional to the level of risk.

The first ever legal framework for AI in Europe has thus seen the light of day, with EU Institutions aligned in terms of advocating a ‘human-centric’ approach to AI regulation. A plan that, in coordination with the Member States, is aimed at guaranteeing the security and fundamental rights of citizens and businesses, while at the same time strengthening the adoption of AI and investment in the sector throughout the Union.

Structure and prohibited AI practices

With reference to its structure, the Proposal is divided into twelve Titles, as follows:

Title I: Scope and definitions

Title II: Prohibited AI practices

Title III: High-risk AI systems

Title IV: Transparency obligations for certain AI systems

Title V: Measures to support innovation

Title VI, VII, VIII: Governance and implementation

Title IX: Codes of conduct

Titles X, XI, XII: Final provisions

Similarly to what has already been experienced with the General Data Protection Regulation (Regulation (EU) 2016/679), this Proposal is also based on a risk assessment approach3. Some AI systems pose a risk that is deemed unacceptable and will therefore be prohibited as per Title II. These include social scoring systems – the ‘scores’ given by governments, such as China’s, to assess citizens’ trustworthiness – but also all those systems that are able to manipulate opinions, exploit and target people’s vulnerabilities, as well as those aiming at mass surveillance (‘There is no room for mass surveillance in our society,’ said Vice-President Margrethe Vestager).

Scope and definitions

In an attempt to regulate an emerging technology before it becomes ‘mainstream’, the Regulation applies to: (a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country; (b) users of AI systems located within the Union; (c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union (Article 2). It is therefore plausible to assume that these rules will have far-reaching implications for major technology companies including Amazon, Google, Facebook and Microsoft4, which have poured huge resources into developing artificial intelligence, but also for dozens of other companies that use software to develop medicines, underwrite insurance policies and conduct credit risk assessments.

Moreover, for the purpose of the regulation, ‘artificial intelligence system’ means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

High-risk AI systems

As mentioned above, Title II establishes a list of prohibited AI systems. Other systems are instead considered ‘high risk’: these are subject to prescriptive rules that echo the accountability approach characterising the GDPR: risk assessment; high quality of datasets; traceability of results; detailed documentation; human oversight; and a high level of robustness, security and accuracy.

The rules contained within the Proposal would therefore place limits on the use of artificial intelligence across a spectrum of activities, spanning recruitment processes, bank lending, school admissions and exam scoring. They would also restrict the use of artificial intelligence by law enforcement and judicial systems – areas considered ‘high risk’ because they could threaten people’s security or fundamental rights.

Some uses, on the other hand, would be “banned” altogether, including facial recognition in public spaces, though only for law enforcement purposes: as a matter of fact, the Proposal designates real-time biometric surveillance as an activity to be banned, while at the same time providing a list of exceptions.

Governance and penalties

Finally, it is important to underline that the Proposal envisages the establishment of governance systems at both EU and national level (Article 56). At EU level, the Proposal establishes a European Artificial Intelligence Board (the ‘Board’), composed of representatives of the Member States and the Commission. Once created, the Board will facilitate the smooth, effective and harmonised implementation of the Regulation by contributing to effective cooperation between the national supervisory authorities and the Commission.

At national level, Member States will have to designate one or more competent national authorities, including the national supervisory authority, to supervise the application and enforcement of the Regulation. The European Data Protection Supervisor will act as the competent authority for the supervision of the institutions, agencies and bodies when they fall within the scope of this Regulation.

Following the approach already adopted by the GDPR, administrative sanctions are envisaged in the event of violation of, or non-compliance with, the rules contained in the Regulation: in particular, such sanctions may amount to up to EUR 30 million or up to 6% of global annual turnover (Article 71).

The Regulation is expected to enter into force in the second half of 2022, followed by a transition period during which the standards will be developed and the governance structures made operational.

Towards a fundamental rights-oriented approach

Artificial Intelligence as such was never designed to monitor people: it depends on what humans teach it. As Isaac Asimov’s First Law of Robotics says, “A robot may not injure a human being or, through inaction, allow a human being to come to harm”5: this is the approach that the European Commission seems to be following in this Proposal. The aspects touched upon by the document are numerous, from software in self-driving cars to algorithms used to vet job candidates, and it arrives at a time when countries around the world are struggling with the ethical ramifications of artificial intelligence (see Rec. 15), in particular in relation to the use of biometric data.

The path the Commission is undertaking is very important to notice: the Proposal does not merely aim to frame a new regulatory code for AI developers, but seeks to build a brand-new approach that reflects on both the technical and the practical aspects of using AI. The Commission is reading the technological issues in the light of the fundamental rights and principles of the EU Charter, seeking to address the various sources of risk through an ex ante assessment. As Pollicino and De Gregorio pointed out, the idea seems to be that of “a digital, constitutionally tempered capitalism in which proportionality and reasonableness are the polar stars”6. The Proposal aims to promote the protection of the rights enshrined in the Charter, namely the right to human dignity (Article 1), respect for private life and protection of personal data (Articles 7 and 8), non-discrimination (Article 21) and equality between women and men (Article 23). It is interesting and very important that the Proposal also takes into account the necessity to prevent chilling effects7 on the exercise of rights that might be affected by the deployment of AI surveillance at various levels, such as the rights to freedom of expression (Article 11) and freedom of assembly (Article 12), and to ensure protection of the right to an effective remedy and to a fair trial, the rights of defence and the presumption of innocence (Articles 47 and 48), as well as the general principle of good administration.

As Vestager explained, “today, we aim to make Europe world-class in the development of a secure, trustworthy and human-centred Artificial Intelligence, and the use of it”. This approach is creating a new benchmark at the level of digital constitutionalism, meaning the reaction of constitutional law to the threats to fundamental rights coming both from technology and from the exercise of private powers8. It is an approach that can also be found in the Proposal for the Digital Services Act9.

The EU Institutions believe that this kind of assessment will not only enhance the horizontal protection of fundamental rights and freedoms, but will also produce positive consequences at the market level. As Rec. 5 of the Proposal states: “by laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council, and it ensures the protection of ethical principles, as specifically requested by the European Parliament”.

Biometric surveillance: is the Commission willing to ban it?

The described approach becomes all the more relevant when the Commission takes into account specific and controversial uses of AI, as listed in Art. 5 of the Proposal, namely:

  1. the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;

  2. the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;

  3. the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics;

  4. the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement.

One specific use of AI fits into both the high-risk and the prohibited categories: remote biometric identification10. Biometric identification can be used for many purposes, some of which are not problematic: for instance, and for the purposes of the Proposal, when it is used at border controls by customs authorities, or whenever we are asked to identify ourselves with our fingerprints or by face recognition.

In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Exceptions are defined and regulated by Art. 5:

  • the targeted search of a missing child;

  • the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack, and

  • the localisation, identification or prosecution of a perpetrator or suspect of a serious criminal offence. Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Even if the Proposal frames, for the very first time, an issue that is currently debated all over the world, it does not seem to get to the heart of the matter: it does not go far enough to protect people from the wide range of biometric mass surveillance practices already in motion across Europe. As a result, the Proposal contradicts itself by permitting certain forms of biometric mass surveillance that it has already acknowledged are incompatible with fundamental rights and freedoms in the EU. Moreover, the exceptions framed for remote biometric surveillance are opaquely worded and leave much to case-by-case interpretation. To give a practical example, localisation and identification are very different concepts, and the vague wording leaves broad discretion to the authorities. This recreates many of the problems in existing data protection laws that have led to calls for a new ban, and also undermines legal certainty for everyone.

Last but not least, it is unlikely that “the Commission is banning facial recognition”, as many authors are currently arguing. From the wording of the Proposal, the prohibition in question only applies to law enforcement purposes, failing to mention equally invasive and ubiquitous uses by other government authorities as well as by private companies11. It seems here that the Legislator is looking at the finger rather than the moon: the issues with facial recognition and “real-time” tools are not confined to law enforcement purposes, since surveillance conducted by private companies nevertheless constitutes biometric mass surveillance. Finally, the peculiarity of this ban is that it covers only “real-time” facial recognition: this leaves room for continued biometric surveillance for ex post identification, meaning the use of already collected images, as in the case of the infamous app Clearview AI12.

As EDRi (European Digital Rights), a non-governmental organisation active against biometric surveillance, states: “whilst happy to see some prohibition (national law required, case-case authorizations) this proposal does not go far enough to ban biometric mass surveillance”13.

Conclusion

While this is still only a Proposal, the European Commission’s new findings clearly demonstrate a brave approach to AI. Even the choice of implementing this new path with a Regulation (rather than a Directive) is meaningful, since for the first time AI will be regulated not by soft law but by directly applicable legislation that reflects “the need for a uniform application of the new rules”14. It is a combination of choices that should be appreciated and that will hopefully lead the EU to establish itself as a major player in the development of trustworthy AI technology. Here too, the comparison with the adoption of the GDPR comes naturally: even then, the change of approach from a more ‘laissez-faire’ directive to a more fundamental rights-oriented regulation turned out to be a model for the world to follow.

1 European Parliament Press Release, Parliament leads the way on first set of EU rules for Artificial Intelligence, Oct. 20, 2020

2 European Commission, White Paper on Artificial Intelligence – A European approach to excellence and trust, Feb. 19, 2020.

3 J. Liboreiro, “‘The higher the risk, the stricter the rule’: Brussels’ new draft rules on artificial intelligence”, Euronews, Apr. 21, 2021, https://www.euronews.com/2021/04/21/the-higher-the-risk-the-stricter-the-rule-brussels-new-draft-rules-on-artificial-intellige

4 A. Satariano, “Europe proposes strict rules for Artificial Intelligence”, The NY Times, Apr. 21, 2021

5 Isaac Asimov quoted by M. O’Connell in Essere Una Macchina: Un Viaggio Attraverso Cyborg, Utopisti, Hacker e Futurologi per Risolvere Il Modesto Problema Della Morte, Adelphi Edizioni, 2018.

6 O. Pollicino and G. De Gregorio, “La Terza via europea per un capitalismo digitale ben temperato”, Il Sole 24 Ore, Apr. 22, 2021, https://www.ilsole24ore.com/art/la-terza-via-europea-un-capitalismo-digitale-ben-temperato-AEFbPy.

7 More inputs on “chilling effects” of facial recognition can be found in the study by INCLO titled “In Focus: Facial Recognition tech stories and rights harms around the world”, 2021, https://www.inclo.net/pdf/in-focus-facial-recognition-tech-stories.pdf.

8 G. De Gregorio, “The Rise of Digital Constitutionalism in the European Union”, SSRN Scholarly Paper, NY: Social Science Research Network, 2019. https://papers.ssrn.com/abstract=3506692.

10 Art. 3 (33) gives a new definition of biometric data: ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data.

11 “Come l’Europa vuole regolamentare le intelligenze artificiali”, Il Post, Apr. 22, 2021, https://www.ilpost.it/2021/04/22/commissione-europea-intelligenza-artificiale/.

14 “Choice of the Instrument”, par. 2.4 of the Explanatory Memorandum, p. 8.

Federica Paolucci

Federica Paolucci is a PhD candidate in Comparative Constitutional Law at Bocconi University, where she also had the opportunity to explore issues at the intersection of law and technology while attending the LLM in Law of Internet Technology in the 2020/2021 academic year.
