
AI in court, a case study under the lens of the Artificial Intelligence Act

Written by Ana Paula Gonzalez Torres.
This article is also available in Italian here.

  1. Artificial intelligence, “black box”, and XAI techniques

The field of artificial intelligence (AI) can be intimidating because it is complicated. Studying it therefore requires understanding basic concepts and building on top of them. To get started, an “algorithm” can be understood as a sequence of instructions for solving a problem[1]. These instructions, ideally, are simple, clear, and unambiguous steps[2]. They are what is needed to obtain a certain output for given input data within a given period of time[3]. However, algorithms are sometimes written in such an ambiguous and opaque way that it is difficult to rationalize their output[4]. In any case, algorithms are important because they are the building blocks on which artificial intelligence stands[5].
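
As a purely illustrative aside (not drawn from the cited sources), the following minimal Python sketch shows what such a sequence of unambiguous steps looks like in practice: binary search takes a sorted input and produces a definite output in a bounded number of steps.

```python
# Minimal illustration of an algorithm as an unambiguous sequence of steps:
# binary search returns, for given sorted input data, a definite output
# within a bounded number of steps.

def binary_search(sorted_values, target):
    """Return the index of `target` in `sorted_values`, or -1 if absent."""
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2           # step 1: pick the middle element
        if sorted_values[mid] == target:  # step 2: compare with the target
            return mid
        if sorted_values[mid] < target:   # step 3: discard half of the input
            low = mid + 1
        else:
            high = mid - 1
    return -1                             # step 4: report that the target is absent

print(binary_search([2, 5, 8, 13, 21], 13))  # -> 3
```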

By contrast, although there is no consensus, “artificial intelligence” usually refers to the scientific discipline developed within computer science. “Applied” AI refers to a set of algorithms supported by a chip (hardware) and data (representing the world and people) to solve a problem[6]. If AI systems can interact with their surroundings, they can be characterized as autonomous and adaptable[7]. Autonomous inasmuch as they are able to perform certain tasks in complex environments without constant user guidance[8]. Adaptable inasmuch as they are able to improve their performance by learning from experience[9].

The “black box” problem refers to certain types of “opaque” technology that are not easily interpretable. As a result, they are not open to scrutiny by users, and sometimes not even by their developers. This makes it difficult to edit, correct, or explain their operating mechanisms[10].

In the realm of artificial intelligence, not all AI systems suffer from the “black box” problem. It affects, in particular, non-symbolic AI techniques such as machine learning and its subfield, deep learning. This is because these techniques involve complex calculations with multiple interdependent variables, which sometimes independently redefine their own structures[11].

As a result, for humans, the operating mechanisms of AI systems that employ opaque techniques become too complex to make sense of[12]. Fortunately, “explainable AI” (XAI) techniques have been and are being developed. They are intended to address the need to explain AI models without sacrificing their performance[13]. In general, two approaches can be identified: 1) analysis of AI models based on their results, and 2) interpretable design of AI models established from the outset.

The first approach is interested in a post hoc analysis of the results[14]. It can be conceptualized from a high-level and a low-level perspective. The high-level perspective studies how the AI model responds to certain constraints[15], for example, once a certain input is removed[16] or a certain value in the calculations is changed[17]. The low-level perspective studies how AI models process specific data[18]. This perspective is interested in how certain data can affect the production of outputs[19], for example, biased input data that generate discriminatory outputs. This approach is modeled on the difference between “how” and “why.” “How” allows us to understand which elements were crucial for a certain output, for example, what data and rules influenced a classification task[20]. “Why” allows us to understand the result, for instance, in terms of causality[21]. In matters of AI, it is important to obtain a causal explanation because systems usually involve calculations with a large number of variables; thus, it is important not to attach importance to merely coincidental relationships between variables[22].
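
As a minimal sketch of this high-level, result-based perspective (my own illustration, not code from any cited source), one can train a simple classifier on synthetic data and then scramble one input feature at a time to see how much each feature contributed to the outputs; the dataset, model, and feature indices are all hypothetical placeholders.

```python
# Hedged sketch: perturb one input at a time and observe the effect on the
# model's outputs (the "high-level" post hoc perspective described above).
# Dataset, model, and features are synthetic placeholders.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)
baseline_accuracy = model.score(X, y)

for feature in range(X.shape[1]):
    X_perturbed = X.copy()
    # "Remove" the feature's information by shuffling its values across rows.
    X_perturbed[:, feature] = rng.permutation(X_perturbed[:, feature])
    drop = baseline_accuracy - model.score(X_perturbed, y)
    print(f"feature {feature}: accuracy drop {drop:.3f}")
```

Features whose scrambling causes a large accuracy drop are the ones that were crucial for the outputs, which is the “how” question described above.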

The second approach is interested in the design of AI models. It focuses on privileging logics that facilitate the explanation of outputs[23]. This can be achieved by setting interpretability as the guiding value in the development of an AI model right from the start[24]. An AI model is interpretable if the outputs, correlations, and patterns gathered by the model can be presented in terms that humans can understand[25]. A limitation of this approach is that it affects performance, because, most of the time, only the simplest models allow one to understand the calculations and combinations of variables performed by the AI model[26].
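
A minimal sketch of the interpretable-by-design approach (again my own illustration, on a standard public dataset rather than anything from the cited works): a deliberately shallow decision tree whose entire logic can be printed and read as a handful of if/then rules, accepting the performance cost mentioned above.

```python
# Hedged sketch: a deliberately simple model whose internal logic is
# readable by humans, illustrating interpretability as a design choice.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The whole model reduces to a few human-readable decision rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```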

In any case, in sociological terms, people are usually better equipped to understand counterfactual explanations, and they find most useful those explanations that state the main causes of a decision. On the one hand, as regards the main causes of a decision, one can use result-based XAI techniques, which help determine which factors contributed to a certain output[27]. In this way, to explain an AI system, one can present a list of factors and specify their relevance for a particular output in a way that is understandable to the recipient[28]. For example, for a person without technical skills, such a list could read: “the factors ‘salary,’ ‘home ownership’ and ‘criminal record’ helped produce the output ‘custody of the daughter to parent X.’ The most relevant was the ‘criminal record’ factor.”
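
To make the idea concrete, here is a hedged sketch of how such a recipient-friendly explanation could be assembled: the factor names come from the custody example above, while the relevance scores are hypothetical placeholders for whatever a result-based XAI technique would actually produce.

```python
# Hedged sketch: turn factor relevances into a plain-language explanation.
# The relevance scores below are invented placeholders, not real outputs.

factor_relevance = {
    "criminal record": 0.52,   # hypothetical values
    "salary": 0.31,
    "home ownership": 0.17,
}
output = "custody of the daughter to parent X"

ranked = sorted(factor_relevance.items(), key=lambda item: item[1], reverse=True)
factor_list = ", ".join(f"'{name}'" for name, _ in ranked)

print(f"The factors {factor_list} helped produce the output '{output}'.")
print(f"The most relevant was the '{ranked[0][0]}' factor.")
```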

On the other hand, counterfactual explanations are obtained by posing hypothetical questions opposite to what has actually happened[29]. For example, counterfactual questions can be framed as follows: “what would the result be if the data were instead X?”. They are especially useful if one is trying to understand how to achieve a different result[30]. In these terms, counterfactual explanations make it possible to check the coherence between different decisions[31] and, where appropriate, also to challenge them[32].
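
As a hedged sketch of the counterfactual question “what would the result be if the data were instead X?” (once more my own illustration on synthetic data), one can nudge the most influential feature of a single instance until the model’s prediction flips and report the smallest change that does so.

```python
# Hedged sketch: search for a minimal single-feature change that flips the
# model's prediction, i.e. a simple counterfactual explanation. Data and
# model are synthetic placeholders.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=3, n_informative=3,
                           n_redundant=0, random_state=1)
model = LogisticRegression().fit(X, y)

instance = X[0].copy()
original = int(model.predict([instance])[0])
feature = int(np.argmax(np.abs(model.coef_[0])))   # most influential feature

found = False
for step in np.linspace(0.05, 10.0, 200):          # increasingly large changes
    for delta in (step, -step):                    # try both directions
        candidate = instance.copy()
        candidate[feature] += delta
        flipped = int(model.predict([candidate])[0])
        if flipped != original:
            print(f"If feature {feature} were changed by {delta:+.2f}, "
                  f"the outcome would flip from {original} to {flipped}.")
            found = True
            break
    if found:
        break
if not found:
    print("No counterfactual found within the searched range.")
```

An explanation of this kind tells the recipient what would have had to be different to obtain another outcome, which is precisely what makes it useful for challenging a decision.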

  2. Regulation of artificial intelligence systems

The proposal 2021/0106, presented on 21 April 2021 and known as the “Artificial Intelligence Act” (AI Act)[33], was preceded by a series of communications[34], opinions[35], resolutions[36], and guidelines[37]. Right from the start, great importance was given to transparency and high-risk scenarios. This has led to a proposal modeled on a “risk-based” approach.

Regardless of uncertainties around the concept of “artificial intelligence”, article 3(1) of the proposed AI Act defines an “artificial intelligence system” as “software developed with one or more of the techniques and approaches” established in Annex I of the AI Act[38]. That annex identifies various techniques developed following different approaches, which could be thought of as statistical[39], symbolic[40], and non-symbolic[41]. Note that this “software” is characterized as a system because it “can, for a certain set of human-defined objectives, generate outputs such as content, forecasts, recommendations or decisions that affect the environments with which they interact”[42].

The decision to adopt such a broad definition is explained by the desire to remain “as technologically neutral and future proof as possible”[43]. In addition, the list of techniques and approaches “should be kept up-to-date”[44] by the European Commission. Thus, it shall have the power to adopt delegated acts “in order to update the list of techniques and approaches set out in Annex I [to the proposal for a Regulation 2021/0106], to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein”[45].

The risk-based regulation distinguishes between:

  1. unacceptable-risk AI systems, for which “prohibited artificial intelligence practices” are established (title II);
  2. high-risk AI systems (title III);
  3. non-high-risk AI systems, subject to the “transparency obligations for certain AI systems” of title IV;
  4. all AI systems that do not pose a high risk, which title IX recommends should adopt codes of conduct.

This approach pays homage to the principle of proportionality by correlating the regulatory burden with the risk[46]. On the one hand, it sets out requirements and obligations for high-risk AI systems. On the other hand, it establishes marginal transparency obligations and voluntary codes of conduct for non-high-risk AI systems. This approach was developed within the New Legislative Framework (NLF), usually employed to regulate products such as medical devices[47]. According to this framework, “producers must ensure that their products comply with the relevant legislation”[48]. The philosophy of the NLF is that the manufacturer, having detailed knowledge of the design and production process, is in the best position to fulfill legal obligations[49]. This approach has resulted in a proposal that places most of the regulatory burden on providers of high-risk AI systems[50].

Bearing this in mind, article 6 classifies high-risk AI systems in part by referring to Annex III[51] of the proposed AI Act. This annex[52] specifies that AI systems used in the areas listed therein are considered “high risk”. Among the various areas, we are interested in point 8, “[a]dministration of justice and democratic processes”. According to the provision, the following are categorized as “high risk”: “a) AI systems intended to assist a judicial authority in the search and interpretation of facts and law and in the application of the law to a concrete set of facts”. Thus, for example, if an AI system is used to suggest which regulation or judicial precedent is relevant in a given case, or if an AI system simply assists in the search for case law in a database, judges would be considered users of a high-risk AI system. Consequently, judges would be subject to the provisions of the AI Act.

It is important to point out that, at least under a formalistic reading of the AI Act, it seems that whenever an AI system is used in the judicial field for the “administration of justice”, AI system providers will have to comply with requirements such as “transparency and provision of information to users”[53], and judge-users with obligations such as to “use such systems in accordance with the instructions of use accompanying the systems”[54]. In the writer’s opinion, it seems worrisome to establish that providers of AI systems will determine the “instructions of use” by which users will have to abide. Especially if one combines the “black box” problem with situations in which the instructions of use will be the main means of understanding how to use, without necessarily understanding, AI systems that shape the outcome of court proceedings.

In particular, in the Italian legal context, there is the limit posed by article 101 of the Constitution, “i giudici sono soggetti soltanto alla legge”, meaning that judges are subject only to the law. Nonetheless, whenever judges cannot understand how an AI system processes regulation or judicial precedent, it seems reasonable to doubt whether the judges are really subject to the law or to the instructions of use established by providers.

In this regard, the proposed AI Act states that AI systems “shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately”[55]. At this point, in my view, to guarantee that the judge-user remains within constitutional limits and the author of the final decision, it is necessary to allow the judge-user to interpret the output that the AI system suggests as a potential outcome of court proceedings. For this purpose, it could be helpful to use XAI techniques focused on results and combine them with local explanations. This approach would allow users to understand which factors produced a certain output in terms of main causes and their relevance to the particular situation at hand.

On the other hand, enabling judge-users to use AI systems “appropriately” would require a top-down approach, in which courts that plan on adopting AI systems actively require their chosen provider to privilege an interpretable design, for example, by requiring providers to design systems with the option to pose counterfactual questions or to facilitate global explanations related to the factors that the AI system considers relevant for a class of recipients. In this regard, article 13 of the proposed AI Act states that “high-risk AI systems shall be accompanied by instructions for use in an appropriate digital or non-digital format, which shall include concise, complete, correct and clear information that is relevant, accessible and understandable to users”[56]. The information to be provided includes, among others:

“(b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including:”[57]

“(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights;”[58]

“(v) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the AI system.”[59]

Note that, unlike the disposition in (b)(iii), (b)(v) is discretionary, as it is preceded by the words “when appropriate”. In the context of judge-users, for whom it is important to understand how AI systems process regulatory and jurisprudential data, it is regrettable to note how the proposed regulation makes this disclosure of information discretionary. Mainly because the disposition relates to the disclosure of information on input data and, in the context of AI systems, it is crucial to know how a model was trained, as the model will ultimately inform judge-users’ decisions. In light of the dispositions of the proposed AI Act, all that remains is to recommend that courts planning to employ AI systems request such information from their providers, either for their own assessments or to be able to effectively question the outputs of AI systems. Otherwise, providers are left free to keep users in the dark by discretionally considering it “inappropriate” to grant information relating to “input, training, validation and testing data”. In this regard, arguing that providing such information is too burdensome or would infringe intellectual property rights cannot put a cap on transparency duties.

Finally, some de jure condendo considerations on automated decision-making (ADM), that is, cases in which decision-making has been delegated to a body that uses AI systems to arrive, automatically, at a decision. Under the AI Act, it is doubtful whether cases of ADM fall within the scope of the regulation. Indeed, these AI systems are not intended to “assist” but to decide tout court. A formalistic reading of the AI Act would support an interpretation excluding AI systems employed in ADM from the requirements and obligations of the proposed AI Act. It is with perplexity that I note that the riskiest AI systems, which could automatically lead to decisions that harm fundamental rights, are not first and foremost listed as prohibited practices within article 5. That one could even argue they may not be included among the high-risk AI systems is outrageous. Following such an interpretation would mean that AI systems used in automated decision-making processes would only be subject to voluntary codes of conduct. Fortunately, article 22 GDPR guarantees the right not to be subject to decisions based solely on automated processing. However, given the importance of Regulation 2021/0106 as an “Artificial Intelligence Act”, it would be commendable, in future amendments, to prohibit AI systems aimed at producing automated decisions, at the very minimum when they concern fundamental rights, or, at the very least, to include increased transparency requirements and obligations in favor of users even when ADM cases do not involve fundamental rights.

  3. The proposal for a regulation and the civil process

Although various projects aim to incorporate AI solutions in the administration of justice at lower[60] and higher[61] courts, I want to draw your attention to a particular case in Genoa, Italy, which involves a university institution and raises interesting questions in light of the proposed AI Act.

The “Predictive Justice” project sees the Court of Genoa and the Scuola Superiore Sant’Anna of Pisa[62] partnering to incorporate legal AI in judicial activities. First of all, it is commendable that the project strives for transparency by following a declared interpretable design approach, such as “explainable ML” in the “Knowledge Discovery Process from Database”[63]. However, the project also uses deep learning, a particularly opaque technique, as an experimental channel when processing data, namely for the “development of Named Entity Recognition (NER) algorithms based on Deep Learning”[64].

In detail, according to professor Comandé, the project is made up of various levels. Level 1, “[i]n the start-up phase, the project aims to analyze decisions with the corresponding files of trial courts according to the criteria and methodologies developed in the Observatory on personal injury, applicable to areas of litigation other than non-pecuniary damages”[65]. Level 2, “[t]he same materials are used also through techniques of machine learning to develop both tools for annotation and automatic extraction of information from legal texts”[66]. Level 3-4, development of “algorithms for analysis and prediction […] aims to recreate and mimic the legal reasoning behind the solution(s) adopted in the judgements by making predictable subsequent decisions on the same subject” [67]. “These tools should also help to explain the reasoning underlying each decision, while the development of suitable tools to explain the criteria defined by the developed AI (level 4) will be tested” [68]. Level 5, “efforts and results in the different levels of research and development will be traced back to the attempt to structure the analysis of the legal argument at such a level of abstraction and systematicity as to contribute to the simplification of all tasks” [69].

From the documents on their official websites and statements made at conferences, it appears that the project is at an initial phase of implementing computational techniques to extract information from legal documents. Therefore, there is still no adoption of AI systems by the Court of Genoa beyond a database for searches of judicial precedents. However, given the implementation of ML techniques in the construction of databases and in light of the declared purpose of the project to “serve justice” and to be “allies of justice”[70], the project would appear to fall within Annex III point 8, “administration of justice”. Therefore, it would involve a “high-risk” AI system, as it is used to assist a judicial authority in searching for and interpreting facts and law and in applying the law. That being said, the Scuola Superiore Sant’Anna is a university institute, which could place the project in the category of scientific research. Interestingly, as it stands, scientific research institutes seem to be outside the scope of the proposed AI Act[71], which means that, to the extent that the project is non-commercial (i.e., outside the market context), the AI systems developed by the Italian researchers appear to be subject only to “recognised ethical standards for scientific research”[72].

In such regard, even the researchers behind the project consider it to be outside the scope of the proposed AI Act[73]. However, they admit that the risk of undermining individual freedoms remains a concern as their project could produce prejudicial effects in concrete cases. For instance, “by couching judges to interpret in a certain way (by driving interpretation in a certain way), thus, making it a ‘high-risk'”[74]. Nonetheless, under the proposed AI Act, one of the criteria to be taken into account when assessing whether an AI system is among the “high-risk” systems is “the extent to which persons likely to suffer the damage or negative impact depend on the result produced by an AI-system, in particular, because for practical or legal reasons it is not reasonably possible to avoid that result”[75].

In any case, requirements such as article 13, “transparency and provision of information to users”, apply by default whenever a high-risk AI system provider intends to put the AI system on the market or into service[76]. It follows that if the project is to be considered “scientific research” because it is not meant to be commercially exploited in the market, then the requirements for high-risk AI systems would not apply. However, the fact that the project makes an AI system available to the Court of Genoa may be interpreted as “putting into service”. In such a case, the project could be categorized as aimed at producing a high-risk AI system and, thus, fall under the requirements and obligations of the proposed AI Act.

In particular, if the project intends to pursue the objectives of levels 3-4 on the development of algorithms for analysis and prediction, whereby “1) Judges can evaluate litigation related to overlapping legal issues (based on similar legal rules, verifications, evidences) and thus facilitate the decision-making process”[77], then, in my view, this is a unique opportunity for a project conducted by a university institute to develop an AI system that incorporates tools that help explain the reasoning behind the outputs informing the decision-making of judges. In terms of transparency, it is interesting to note that the project follows an interpretable design, for example by using XAI techniques together with Akoma Ntoso, a language commonly used for “technology-neutral electronic representations”[78] of legal documents, and “consensus algorithms” to establish an agreement between multiple parties around the purpose of a specific data point[79]. It is an exciting development, as it outlines other avenues for developing AI systems that are interpretable in human terms. In addition, the team at Scuola Sant’Anna has mentioned the idea of developing tools to “compare the ‘judge’s reasoning’ and the model reasoning. We can check if there is a discrepancy to understand why the model is wrong and why the model is wrong for that specific case law.”[80] In this case, one could use such explanations to check consistency between decisions[81] and even appeal decisions based on declared decisive factors[82].

In conclusion, although the use of XAI techniques by the “Predictive Justice” project is commendable, their techniques will have to provide, in practice, an explanation that is satisfactory from a sociological point of view. The involvement of AI systems in the judicial system can be successful if guided by a hybrid approach seeking satisfactory technical performance and interpretability of outputs by users.

  4. Conclusion

The field of artificial intelligence is particularly complicated as it employs mathematical, statistical, and biological concepts. Although legal professionals usually have a social science training, it is their responsibility to be aware of the changes taking place in the world. Innovation is taking us all for a ride by developing at breakneck speed. However, jurists are called on by their commitment to the rule of law to raise the issues that new technologies may pose to the social fabric. It is even more important to do so when technologies like AI solutions start to be implemented in the administration of justice and touch on fundamental rights. Thus, legislators, judges, lawyers, and legal experts have the task of informing themselves and proposing solutions to the current and future issues that face our society and profession. Bottom line, demands for a faster legal system cannot be privileged at the expense of fundamental principles like transparency, nor can they affect decision-makers’ ability to understand and challenge the measures that affect fundamental rights.

[1] A. Levitin, Introduction to the design and analysis of algorithms, III ed., New York, Pearson, 2011, 13 p.

[2] A. Longo, G. Scorza, Intelligenza artificiale: l’impatto sulle nostre vite, diritti e libertà, Milano, Mondadori Università, 2020, 21 p.; M. Vimalkumar, A. Gupta, D. Sharma, Y. Dwivedi, Understanding the Effect that Task Complexity has on Automation Potential and Opacity: Implications for Algorithmic Fairness, in “AIS Transactions on Human-Computer Interaction”, 2021, v. 13, n. 1, 107 p.

[3] A. Levitin, Introduction to the design and analysis of algorithms, cit., 13 p.

[4] B. Mittelstadt, P. Allo, M. Taddeo, S. Wachter, L. Floridi, The ethics of algorithms: Mapping the debate, in “Big Data & Society”, 2016, n. 3, pp. 1-21.

[5] L. Floridi, The 4th Revolution. How the Infosphere is Reshaping Human Reality, Oxford, Oxford University Press, 2014, 156 p.; J. Nieva-fenoll, Inteligencia artificial y proceso judicial, Madrid, Marcial Pons, 2018, 21 p.

[6] A. Longo, G. Scorza, Intelligenza artificiale: l’impatto sulle nostre vite, diritti e libertà, cit., pp.26-27.

[7] B. Friedman, H. Nissebaum, Bias in Computer Systems, in “ACM Transactions on Information Systems”, v. 14, n. 3, 1996, 335 p.

[8] University of Helsinki, Reaktor, How Should We Define AI?, in “https://course.elementsofai.com/1/1”.

[9] University of Helsinki, Reaktor, How Should We Define AI?, cit.

[10] P. Comoglio, Nuove tecnologie e disponibilità della prova. L’accertamento del fatto nella diffusione delle conoscenze, cit., 335 p.

[11] A. Joshi, Machine Learning and Artificial Intelligence, cit., 164 p.

[12] B. Mittelstadt, C. Russell, S. Wachter, Explaining Explanations in AI, in “Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19)”, 2018, 283 p.

[13] D. Gunning, M. Stefik, J. Choi, T. Miller, S. Stumpf, G. Yang, Explainable artificial intelligence (XAI), in “DARPA/I20,” 2019, 52 p.

[14] A. Barredo Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, F. Herrera, Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, in “Information Fusion”, 2020, v. 58, 94 p.

[15] A. Rai, Explainable AI: from black box to glass box, cit., 138 p.

[16] W. Samek, A. Binder, G. Montavon, S. Lapuschkin, and K. Müller, Evaluating the visualization of what a deep neural network has learned, in “IEEE Transactions on Neural Networks and Learning Systems”, 2017, v. 28, 2665 p.

[17] M. Wick, W. Thompson, Reconstructive expert system explanation, in “Artificial Intelligence”, 1992, v. 54, n. 1-2, 58 p.

[18] A. Rai, Explainable AI: from black box to glass box, cit., 138 p.

[19] P. Kim, Data-driven discrimination at work, in “Willian & Mary Law Review”, 2017, v. 58, n. 3, 931 p.

[20] D. Doran, S. Schulz, T. Besold, What does explainable AI really mean? A new conceptualization of perspectives, in “CEUR Workshop Proceedings”, 2018, 2071 p.

[21] A. Barredo Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, F. Herrera, Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, cit., 101 p.

[22] R. Roscher, B. Bohn, M. Duarte, J. Garcke, Explainable Machine Learning for Scientific Insights and Discoveries, in “IEEE Access”, 2020, v. 8, 42222 p.

[23] W. Samek, G. Montavon, S. Lapuschkin, C. Anders, K. Müller, Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications, in “Proceedings of the IEEE”, 2021, v. 109, iss. 3, pp. 247-278.

[24] J. Murdoch, C. Singh, K. Kumbier, R. Abbasi-Asl, B. Yu, Definitions, methods, and applications in interpretable machine learning, in “Proceedings of the National Academy of Sciences of the United States of America”, 2019, 22079 p.

[25] W. Samek, G. Montavon, S. Lapuschkin, C. Anders, K. Müller, Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications, cit., pp. 247-278.

[26] S. Wachter, B. Mittelstadt, C. Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, in “Harvard Journal of Law & Technology”, 2018, v. 31, n. 2, 845 p.

[27] R. Roscher, B. Bohn, M. Duarte, J. Garcke, Explainable Machine Learning for Scientific Insights and Discoveries, cit., 42222. p.

[28] R. Roscher, B. Bohn, M. Duarte, J. Garcke, Explainable Machine Learning for Scientific Insights and Discoveries, cit., 42222 p.

[29] S. Wachter, B. Mittelstadt, C. Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, cit., 850 p.

[30] B. Mittelstadt, C. Russell, S. Wachter, Explaining Explanations in AI, cit., 281 p.

[31] F. Doshi-Velez, M. Kortz, R. Budish, C. Bavitz, S. Gershman, D. O’Brien, K. Scott, J. Waldo, D. Weinberger, A. Weller, A. Wood, Accountability of AI Under the Law: The Role of Explanation, cit., 14 p.

[32] S. Wachter, B. Mittelstadt, C. Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, cit., 860 p.

[33] Proposal for a Regulation of the European Parliament and of the Council. Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, Brussels, 21.4.2021, COM(2021) 206 final, 2021/0106(COD).

[34] European Commission, A Digital Market Strategy for Europe, COM (2015) 192 final; the next communication touching upon the subject, by the European Commission, was in January 2017, Building a European Data Economy, COM(2017) 9 final; European Commission, Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions. Artificial Intelligence for Europe, COM(2018) 237 final.

[35] European Economic and Social Committee Opinion. Artificial intelligence – The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society (own-initiative opinion), (INT/806-EESC-2016-05369-00-00-AC-TRA), in “Official Journal of the European Union”, C 288.; European Economic and Social Committee 526th EESC Plenary Session of 31 May and 1 June 2017, Opinion of the European Economic and Social Committee on ‘Artificial intelligence – The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society’ (own-initiative opinion) (2017/C 288/01).

[36] European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).

[37] High-level expert group on artificial intelligence, Ethics Guidelines for Trustworthy AI, Brussels, 2019; U. von der Leyen, A Union that strives for more. My agenda for Europe. Political Guidelines for the Next European Commission 2019-2024; European Commission, White Paper on Artificial Intelligence – A European approach to excellence and trust, COM(2020) 65final.

[38] Annex I, Artificial Intelligence Techniques and Approaches referred to in Article 3, in Annexes to the Proposal for Regulation of the European Parliament and of the Council. Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final, Brussels, 2021.

[39] Annex I, cit., lt. c).

[40] Annex I, cit., lt. b).

[41] Annex I, cit., lt. a)

[42] Proposed AI Act, cit., art. 3(1).

[43] Proposed AI Act, cit., “5.2. Detailed explanation of the specific provisions of the proposal,” point 5.2.1.

[44] Proposed AI Act, cit., recital 6.

[45] Proposed AI Act, cit., art. 4.

[46] Commission Staff Working Document. Executive Summary of the Impact Assessment Report. Accompanying the Proposal for a Regulation of the European Parliament and of the Council. Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, SWD(2021) 85 final, Brussels.

[47] M. Veale, F. Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act, in “Computer Law Review International”, v. 22, n. 4, 2021, 100 p.

[48] Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products, and repealing Council Decision 93/465/EEC (OJ L 218, 13.8.2008, p. 82).

[49] M. Veale, F. Zuiderveen Borgesius, Demystifying the Draft EU Artificial Intelligence Act, cit., 101 p.

[50] Proposed AI Act, cit., art. 16.

[51] Proposed AI Act, cit., art. 6(2).

[52] Annex III, High-Risk AI Systems referred to in Article 6(2), in Annexes to the Proposal for Regulation of the European Parliament and of the Council. Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final, Brussels, 2021.

[53] Proposed AI Act, cit., art. 13.

[54] Proposed AI Act, cit., art. 29(1).

[55] Proposed AI Act, cit., art. 13(1).

[56] Proposed AI Act, cit., art. 13(2).

[57] Proposed AI Act, cit., art. 13(3).

[58] Proposed AI Act, cit., art. 13(3)(iii).

[59] Proposed AI Act, cit., art. 13(3)(v).

[60] See Tribunale di Firenze, Giustizia 4.0. Una metodologia strategia, innovativa e replicabile per risolvere i contenziosi riducendo i tempi della giustizia, in “https://www.forumpachallenge.it/soluzioni/giustizia-semplice-40#”.

[61] Accordo Quadro tra La Corte di Cassazione, Centro Elettronico di Documentazione (C.E.D.), con sede in Roma, presso il Palazzo di Giustizia, piazza Cavour (di seguito anche “C.E.D.”), rappresentata dal Primo Presidente Pietro Curzio e La Scuola Superiore Universitaria Superiore IUSS Pavia, C.F. 96049740184, P.IVA n. 02202080186, con sede in Pavia, presso il Palazzo del Broletto, Piazza della Vittoria n. 15 (di seguito anche “IUSS”), rappresentata dal Rettore Prof. Riccardo Pietrabissa, in “https://www.cortedicassazione.it/cassazione-resources/resources/cms/documents/ACCORDO_TRA_CED_E_SCUOLA_UNIVERSITARIA_SUPERIOR”.

[62] R. Casaluce, Exploring Trial Courts Legal Databases: Part 2 – Length of legal documents, in “https://www.predictivejurisprudence.eu/exploring-trial-courts-legal-databases-part-2-length-of-legal-documents/”, 2021.

[63] Predictive Justice, in “https://www.predictivejurisprudence.eu/”.

[64] Predictive Justice, Anonymization model, in “https://www.predictivejurisprudence.eu/the_project/anonymization-model/”.

[65] Algorithm Watch, Automating Society Report 2020, Berlin, AlgorithmWatch gGmbH, 2020, 151 p.

[66] Algorithm Watch, Automating Society Report 2020, cit., 151.

[67] Algorithm Watch, Automating Society Report 2020, cit., 151.

[68] Algorithm Watch, Automating Society Report 2020, cit., 151.

[69] Algorithm Watch, Automating Society Report 2020, cit., 151.

[70] Predictive Justice, in “https://www.predictivejurisprudence.eu/”.

[71] N. Smuha, Feedback from: Legal, Ethical & Accountable Digital Society (LEADS) Lab, University of Birmingham, in “https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665480_en”, 2021.

[72] Proposed AI Act, cit., recital 16.

[73] See D. Amram, High-risk database and the new regulation on AI, SoBigData++ and LeADS joint Awareness Panel, 6.07.2021, ScuolaSantAnna, in “https://www.youtube.com/watch?v=0VDsDBBOkxY&list=WL&index=31”, 2021, 1:53:24 et seq.

[74] See D. Amram, High-risk database and the new regulation on AI, cit.

[75] Proposed AI Act, cit., art. 7(2)(e).

[76] Proposed AI Act, cit., art. 2.

[77] See D. Licari, Predictive Justice. Towards a fully automated data-driven platform for Legal Analytics, cit.

[78] Akoma Ntoso, in “http://www.akomantoso.org/”.

[79] “For example, each sentence is evaluated by 3 experts and we apply a majority vote for the choice of the final label. We avoid the labelling by creating a web application for annotations so the expert can easily choose the right label”. See. D. Licari, Predictive Justice. Towards a fully automated data-driven platform for Legal Analytics, cit.

[80] See D. Licari, Predictive Justice. Towards a fully automated data-driven platform for Legal Analytics, SoBigData++ and LeADS joint Awareness Panel, 6.07.2021, ScuolaSantAnna, 2021, 1:35:36 et seq, https://www.youtube.com/watch?v=0VDsDBBOkxY&list=WL&index=31

[81] F. Doshi-Velez, M. Kortz, Accountability of AI Under the Law: The Role of Explanation. Berkman Klein Center Working Group on AI Interpretability, in Berkman Klein Center for Internet & Society working paper, 2017.

[82] Remember that “unconditional counterfactual explanations” refer to the minimum conditions that would have led to a different result. See S. Wachter, B. Mittelstadt, C. Russell, Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, in “Harvard Journal of Law & Technology”, 2018, v. 31, n. 2, pp. 841-887.
