Monday, June 24, 2024

AI Act and legal counseling

by Ana Paula Gonzalez Torres

  1. Artificial intelligence and the legal profession

Nowadays, the legal profession avails itself of “predictive” software “designed for use by legal departments, insurers […], as well as lawyers for them to anticipate the outcome of litigation. Theoretically, they could also assist judges in their decision-making.”[1] The aim of “predictive” software is no longer to find the solution but to model the judicial risk in order to evaluate it:[2] “we do not predict what will happen, but the different possible scenarios and their probability of occurring.”[3]

In this regard, there is a difference[4] between the two: a) predicting “is the act of announcing what will happen in advance of future events (by supernatural inspiration, by clairvoyance or premonition),”[5] while b) forecasting “is the result of observing a set of data to envisage a future situation, proposing neither a result nor an interval, but a possibility.”[6]

These tools propose to establish the probability of success, or failure, of a case before a court by statistically modeling previous decisions.[7] Such statistical modeling employs natural language processing and machine learning methods, not to reproduce legal reasoning[8] but to identify correlations between the different parameters of a decision.[9] The goal is to infer one or more deductive or inductive models and then use them to foresee a future judicial decision.[10]

Natural language processing involves analyzing large amounts of natural language data to understand the contents of documents, including the contextual nuances of the language within them.[11] This method allows one to accurately extract information and insights from documents, as well as to categorize and organize them.[12] Recently, systems based on machine learning have produced remarkable advances by extracting complex patterns and learning from large volumes of data.[13] Machine learning by itself produces models by automatically searching for correlations,[14] leading to categorisations according to parameters identified by the developer or discovered by the machine.[15] In this fashion, predictive software can provide a graphic representation of the probability of success for each outcome of a dispute, based on criteria entered by the user (specific to each type of dispute).[16]
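To make the bag-of-words style of analysis described above concrete, here is a minimal Python sketch, using entirely hypothetical decision text and parameters (loosely inspired by the divorce-claim parameters in note 9), of how a document can be reduced to counts of legally relevant terms:

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def keyword_profile(text: str, keywords: set[str]) -> Counter:
    """Count how often each term of interest occurs in the text."""
    return Counter(t for t in tokenize(text) if t in keywords)

# Hypothetical decision text and hypothetical parameters of interest.
decision = "The marriage lasted ten years; adultery was proven; income was modest."
params = {"marriage", "adultery", "income", "length"}
profile = keyword_profile(decision, params)
print(profile["adultery"])  # → 1
```

Real systems operate on far richer representations, but the principle is the same: the text of a decision is turned into countable features that a statistical model can correlate with outcomes.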

In any case, independently of the method used to develop a system, one of the main difficulties in the legal field is that legal data, the main characteristics of a dispute, and its contextual elements cannot be considered on the same level. For big data, by contrast, law and jurisprudence are simple facts, on the same level as the details of a file or the name of a judge.[17]

  2. Predictive models

Predictive models are driving advances in legal counseling and the decision-making process.[18] There are two main approaches to predictive modeling:[19]

  a) Inductive models: they produce a forecast by identifying patterns within large raw datasets, without explicit predetermined rules.[20] This involves analyzing input data according to the desired result, learning to identify patterns, and then generating computational heuristics that can be used to interpret new data.[21] In essence, they generalize from what has been observed to what has not yet been observed, extracting from the provided data patterns for interpreting reality in order to apply them in the future.[22]

In the legal context, the inductive model builds links between different lexical groups within a judicial decision. The input groups (facts and reasoning) are correlated with output groups (holdings), aiming to provide a classification framework for future decisions.[23]
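A toy version of such an inductive model, with invented past decisions and outcomes, might count which words co-occur with which holdings and then score a new case against those learned correlations. This sketch illustrates the idea only and is not any real product's method:

```python
import re
from collections import Counter, defaultdict

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def train(examples):
    """examples: list of (decision_text, outcome) pairs. For each outcome,
    count how often each word appears — the 'correlations' the model induces."""
    counts = defaultdict(Counter)
    for text, outcome in examples:
        counts[outcome].update(tokens(text))
    return counts

def predict(counts, text):
    """Score each outcome by how often the new text's words were seen with
    that outcome, and return the best-scoring outcome."""
    words = tokens(text)
    return max(counts, key=lambda o: sum(counts[o][w] for w in words))

# Entirely hypothetical past decisions (input groups → output groups).
past = [
    ("claimant proved adultery and long separation", "granted"),
    ("adultery proven, maintenance awarded", "granted"),
    ("no evidence of fault, claim dismissed", "dismissed"),
]
model = train(past)
print(predict(model, "adultery was proven"))  # → granted
```

Note that the model knows nothing of legal reasoning: it merely generalizes from word-outcome co-occurrences, which is precisely why the flaws of the inductive method discussed below apply to it.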

The problem with current big data processing systems is that they produce statistics on enormous datasets[24] without any real guarantee of excluding false correlations.[25] Thus, the models are exposed to the flaws of the inductive method,[26] such as the impossibility of generating correct evaluations from generalizations based on singular observations.[27] Another weakness is the extreme sensitivity of the inductive method to the content and characteristics of the observations: material or technical errors during data collection can condition all the results of the inferential process.[28]

  b) Deductive models: the result comes from applying general rules to specific and concrete cases.[29] They employ formal logic to represent, simultaneously, the provisions of legal regulations, the specific facts of the case, and algorithms for inferential reasoning.[30] The advantage of a deductive model is that it can calculate the possible contents of a judicial decision while offering the advantages of a deterministic algorithm:[31] given the same premises, the reasoning inevitably leads to the same result.[32] Note that “if… then…” reasoning is similar to legal reasoning; the difference is that, in deductive models, the results are based on probabilities.[33]
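The deterministic “if… then…” character of a deductive model can be sketched as follows; the rules and facts here are invented purely for illustration, whereas real systems would encode legal provisions in formal logic:

```python
def decide(case: dict) -> str:
    """Apply hand-written general rules to the facts of a concrete case.
    The model is deterministic: the same facts always yield the same result."""
    if case.get("adultery") and case.get("marriage_years", 0) >= 1:
        return "divorce_granted"
    if not case.get("adultery"):
        return "claim_dismissed"
    return "further_examination"

# Hypothetical facts of a concrete case.
facts = {"adultery": True, "marriage_years": 10}
print(decide(facts))                    # → divorce_granted
print(decide(facts) == decide(facts))   # determinism → True
```

The fragility of this approach is visible even in the toy: every rule presupposes that terms like "adultery" have one fixed meaning, which is exactly the assumption that the semantic ambiguity of legal rules, discussed next, defeats.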

The problem with the deductive approach in the legal context is that legal rules are characterized by semantic and syntactic ambiguity, which is an obstacle to straightforward logical representation.[34] This problem undermines the reliability of deductive models’ forecasts.[35] Furthermore, different legal rules can be applied to even the simplest of concrete cases because legal rules are susceptible to different interpretations.[36] Legal rules also suffer from the limits of legislative technique: plethoric language, contradictions, and a hierarchical structure.[37] While in scientific settings it is common to have a detailed, well-defined specification of a system’s behavior in all types of situations, in the field of law it is normal, and even encouraged, for rules to be left open to interpretation, because solving a dispute requires interpreting the general rule in view of the details of the specific case.[38] In such a scenario, the ability to generate reliable predictions when the number of valid answers can be greater than one is evidently limited.[39]

One must differentiate between commercial discourse and the reality of using and deploying these technologies.[40] Currently, AI systems present great potential to help with tasks like document review, legal research, document drafting, and predicting the results of judicial proceedings.[41] Nonetheless, the risk is that the language of the algorithm becomes a hegemonic language that asserts itself as predictive justice and legal calculability.[42] Thus, we will reflect on the legal profession’s future from a regulatory perspective.

  3. Artificial intelligence and legal counseling

From a lawyer’s perspective, AI-based technology seems to represent an evolution of the activity of providing legal advice. This evolution of legal counseling would be based on an empirical and systematic assessment of the probability of success of a judicial proceeding[43] or transaction, possibly even avoiding a lengthy and costly trial.[44] Following such a perspective, there is great interest in employing AI systems like Lex Machina[45] to develop judicial strategies, or Docracy,[46] Luminance,[47] and Predictice[48] to draw up contracts and court documents.[49] The result is an augmented reality in which legaltechs make the entire body of case law accessible and operate at a much finer level of granularity, providing details on individual judges or parties.[50] The problem is that this augmented reality could further accentuate the distortion of competition and the inequality of arms between law firms that use “predictive” software and those that do not.[51]

A classification of the use of AI systems in the legal profession can be made according to the service offered:[52]

  • Case-law search engines
  • Online dispute resolution
  • Assistance in drafting legal documents
  • Analysis (predictive, scales)
  • Categorisation of contracts and detection of divergent or incompatible contract clauses
  • “Chatbots” to inform litigants or support them in their legal proceedings

Would the proposed regulation, the AI Act, apply to such AI systems employed in legal counseling? The proposed regulation establishes, in point 8 of Annex III, as high-risk those AI systems in the area of:

“8. Administration of justice and democratic processes:

(a) AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.”

In the first place, if we consider legal counseling by lawyers as contributing to the administration of justice insofar as legal advice guides the exercise of fundamental rights before a court, then the use of an AI system for drawing up judicial strategies would make that AI system fall under point 8, “administration of justice.”

In the second place, we should ask whether the AI system is “intended to assist a judicial authority.” As long as the AI system is intended to assist private actors like lawyers, law firms, and companies’ legal departments in legal research, assistance in drafting legal documents, categorization of contracts, and communication between lawyers and clients, the proposed regulation does not seem to apply. Thus, providers of AI systems for legal counseling will not be subject to any of the requirements and obligations of the proposed regulation, such as risk management, even if one could argue that such an AI system could hinder the right of access to legal counsel. For example, imagine a client unable to obtain legal counseling because an AI system determines from the start that theirs is a lost cause. AI systems used in legal counseling would thus only be subject to the codes of conduct that providers of AI systems draft and adopt voluntarily.[53]

It is unclear whether point 8 will be extended to cover AI systems assisting private actors, like law firms or practicing lawyers, in offering legal counseling. So far, the concern has been marginal, almost disregarded in the comments submitted during the feedback period, nor does there seem to be public concern about treating their use in legal counseling as a high-risk application. Nonetheless, because there is still time to refine the wording of the proposed regulation, law firms and practicing lawyers should take measures to manage the risks associated with the use of AI systems, as legal counseling is closely related to fundamental rights and could, as such, be added to the category of high-risk AI systems.

Lastly, in the legal sector, if law firms use chatbots to welcome customers via their websites or to schedule appointments with human lawyers, they are subject to the proposed AI Act’s transparency obligations, so that individuals are informed that they are interacting with AI systems and not human lawyers.[54]

  4. Conclusion

As we have seen, the discovery of complexity has undermined the hard sciences’ own confidence in attaining certainty: predicting is no longer anticipating a future that must necessarily happen, but identifying the space of alternatives compatible with the state and the becoming of phenomena.[55] While we thought the whole process was aimed at a final outcome, AI systems provide, if not the solution, at least a very precise idea of the outcome, even before starting a case.[56]

In such terms, we should look with open minds at augmented justice, in which the machine, on the one hand, constitutes a support tool orienting legal professionals in the labyrinth of a multilayered set of norms and, on the other, guarantees the fullest, most effective, and best-motivated protection of rights.[57] Artificial intelligence could strengthen the legal system’s openness to the new and constitute the alter ego of a fully hermeneutically aware legal interpreter. AI systems offer the tools to be fully aware of past jurisprudence and to fully motivate a discontinuity with respect to dominant past judicial decisions in relation to the scenarios the AI system presents.[58]

Finally, “the rule of law evolves unpredictably over time, but this does not mean that it retains its prescriptive power: law has a history … and will continue to have one, the digital version is only the most recent internal transformation of its normativity.”[59] As the legal system changes, the legal profession should remain vigilant, upholding that its final goal is to defend people’s rights in compliance with current legislation, but with an eye open to future legislation.

 

[1] European Commission for the Efficiency of Justice (CEPEJ), European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment, Appendix I, In-depth study on the use of AI in judicial systems, notably AI applications processing judicial decisions and data, cit., 30; “Far from being new, the very idea of predictive justice was, as Professor Bruno Dondero recalls, already germinating in the work of the mathematician Siméon-Denis Poisson, published in 1837, on the probability of judgments.” see J. Sauvé, La justice prédictive, Cour de Cassation, 2018, https://www.courdecassation.fr/publications_26/prises_parole_2039/discours_2202/marin_procureur_7116/justice_predictive_38599.html.

[2] A. Garapon and M. R. Ferrarese, La giustizia digitale: determinismo tecnologico e libertà, Bologna, 2021.

[3] A. Véhel, interview with A. Dumourier, in Le Monde du Droit, 2016.

[4] “The confusion in the common use of “prediction,” as opposed to forecast, with regard to “predictive justice” seems to be explained by a transfer of the term from the “hard” sciences, where it refers to a variety of data science techniques derived from mathematics, statistics and game theory that analyse present and past facts to make hypotheses about the content of future events.” see supra note 1, at 30.

[5] Supra note 1, at 30.

[6] Supra note 1, at 30.

[7] “Predictive” software’s logic is based on either Bayesian estimation or discriminative methods which try to estimate the current or future range of values of a variable (e.g., the outcome of a trial) from the analysis of past examples. see supra note 1, at 31; A. Punzi, Difettività e giustizia aumentata. L’esperienza giuridica e la sfida dell’umanesimo digitale, in Ars interpretandi, 2021, fasc. 1, 124.

[8] “For example, today’s online translators do not carry out abstract reasoning. They infer a probable estimate for the best match between groups of lexical structures and translations already done.” see supra note 1, at 33.

[9] Parameters of a decision could be for example, in a divorce claim “the length of marriage, the income of spouses, the existence of adultery, the amount of the benefit pronounced, etc.” see supra note 1, at 29.

[10] Supra note 1, at 29-30; P. Comoglio, Nuove tecnologie e disponibilità della prova. L’accertamento del fatto nella diffusione delle conoscenze, Torino, 2018, 333; A. Condello, Il non-dato e il dato. Riflessioni su uno scarto fra esperienza giuridica e intelligenza artificiale, in Ars interpretandi, 2021, fasc. 1, 109.

[11] Natural Language Processing involves “precise indexing of the retrievable texts and the verbal juxtapositions typical of the context under examination, including symptomatic expressions tied to the use of other lexical terms (acquisition, loss or taking, for example, with reference to the use of the term possession, may refer to differently regulated and differently occurring contexts).” see G. Corasaniti, Intelligenza artificiale e diritto: il nuovo ruolo del giurista, in U. Ruffolo, Intelligenza artificiale. Il diritto, i diritti, l’etica, Giuffrè Francis Lefebvre, Milano, 2020, 399-400.

[12] P. Comoglio, supra note 10, at 354.

[13] “The idea nowadays is no longer to write reasoning rules as was done for expert systems (which relied on processing rules written by a computer scientist; they were able to answer specialised questions and reason using known facts, executing predefined encoding rules in an engine), but to let machine learning systems themselves identify existing statistical models in the data and match them to specific results.” see supra note 1, at 35.

[14] Supra note 1, at 33.

[15] Supra note 1, at 35.

[16] Some argue that “AI does not ‘predict’ anything; what AI does is ‘estimate’: it can estimate the probability that an event will occur.” As such, one should “permit the application of narrow/weak AI whose task is not to decide but to estimate, perhaps by accessing data about reality that would otherwise remain outside the decision because of their complexity, quantities or probabilities useful to support first a reasoned out-of-court negotiation and then, should that attempt fail, the possible judicial resolution of the dispute.” see R. Rovatti, Il processo di apprendimento algoritmico e le applicazioni nel settore legale, in U. Ruffolo, XXVI Lezioni di diritto dell’intelligenza artificiale, Giappichelli, Torino, 2021; “The intellectual activity consists mainly of various kinds of research” see A. Turing, Intelligent machinery, London, 1948, 3-23, in B. Meltzer, Machine Intelligence, Edinburgh, 1969.

[17] Paris Innovation Review, Predictive justice: when algorithms pervade the law, 2017, http://parisinnovationreview.com/articles-en/predictive-justice-when-algorithms-pervade-the-law.

[18] N. Lettieri, Contro la previsione. Tre argomenti per una critica del calcolo predittivo e del suo uso in ambito giuridico, in Ars interpretandi, 2021, fasc. 1, 85.

[19] Be aware that, just like inductive and deductive logical reasoning themselves, predictive models based on such logics can be “affected by intrinsic limits capable of easily leading to fallacious representations of reality and of its future evolutions.” see N. Lettieri, supra note 18, at 86.

[20] P. Comoglio, supra note 10, at 357; N. Lettieri, supra note 18, at 87.

[21] “The reference is, first of all, to the machine learning techniques employed in recent years in all situations where it is difficult or impossible to find solutions through algorithms that specify precisely all the operations to be performed on the data.” see P. Domingos, L’Algoritmo Definitivo: La macchina che impara da sola e il futuro del nostro mondo, Bollati Boringhieri, 2016; P. Comoglio, supra note 10, at 350.

[22] J. Nieva-Fenoll, Inteligencia artificial y proceso judicial, Madrid, 2018, 99-100; N. Lettieri, supra note 18, at 87.

[23] Supra note 1, at 31-32.

[24] In recent years, numerous works have been published that apply inductive inference techniques to large amounts of judicial data to predict the evolution of jurisprudence. see D. M. Katz, M. J. Bommarito, and J. Blackman, A General Approach for Predicting the Behavior of the Supreme Court of the United States, in PLoS One, vol. 12, 2017; for a publication regarding the prediction of judicial decisions, see N. Aletras, D. Tsarapatsanis, D. Preoţiuc-Pietro, and V. Lampos, Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective, in PeerJ Computer Science, 2016.

[25] D. Cardon, A quoi rêvent les algorithmes, nos vies à l’heure des big data, La République des idées, Editions du Seuil, 2015.

[26] P. Comoglio, supra note 10, at 357.

[27] N. Lettieri, supra note 18, at 87.

[28] J. Nieva-Fenoll, supra note 22, at 84; N. Lettieri, supra note 18, at 88.

[29] P. Comoglio, supra note 10, at 348; N. Lettieri, supra note 18, at 86.

[30] N. Lettieri, supra note 18, at 86.

[31] For a discussion of the conflicting notions of determinism, between technological determinism and sociological determinism, see P. Comoglio, supra note 10, at 193.

[32] N. Lettieri, supra note 18, at 86.

[33] A Garapon and M. R. Ferrarese, supra note 2.

[34] B. G. Mattarella, La trappola delle leggi: molte, oscure, complicate, Bologna, 2011.

[35] N. Lettieri, supra note 18, at 86.

[36] U. Ruffolo, La Machina Sapiens come “Avvocato Generale” ed il Primato del Giudice Umano: Una Proposta di Interazione Virtuosa, in XXVI Lezioni di diritto dell’intelligenza artificiale, Giappichelli, Torino, 2021.

[37] B. G. Mattarella, supra note 34; Ruffolo, supra note 36.

[38] J. Nieva-Fenoll, supra note 22, at 136; J. Kroll, J. Huey, S. Barocas, E. Felten, J. Reidenberg, D. G. Robinson, and H. Yu, Accountable Algorithms, in University of Pennsylvania Law Review, vol. 165, 2017.

[39] N. Lettieri, supra note 18, at 87.

[40] Supra note 1, at 14.

[41] For example, Ross Intelligence, https://www.rossintelligence.com/features, for services like the analysis of legal documents, the search for specific phrases or terms in the texts of previous decisions, and the answering of (simple) questions around legal issues; LawGeex, for rapid analysis of contracts, https://www.lawgeex.com/.

[42] A. Condello, supra note 10, at 109.

[43] In terms of predictive analysis, “knowing how a judgment will be arrived at is an essential element for lawyers in predicting the outcome of a case, and they believe that knowing one’s judge is sometimes almost as important as knowing the law.” see supra, note 1, at 26.

[44] Supra note 1, at 41.

[45] Lex Machina, https://lexmachina.com/.

[46] Docracy, https://www.docracy.com/.

[47] Luminance, https://www.luminance.com/.

[48] Predictice, https://predictice.com/.

[49] For an extended analysis see R. Susskind, The End of Lawyers?: Rethinking the Nature of Legal Services, Oxford, 2009; R. Susskind, Tomorrow’s Lawyers: An Introduction to Your Future, Oxford, 2013; A. Garapon and M. R. Ferrarese, supra note 2.

[50] J. Nieva-Fenoll, supra note 22, at 143; A. Garapon and M. R. Ferrarese, supra note 2; there has always been the desire “to make comparisons between panels of judges, more or less empirically, so as to give better advice to clients dealing with a particular judge or panel of judges” see supra note 1, at 26.

[51] Supra note 1, at 26-27.

[52] “Other activities carried out by legal tech companies have not been included in this classification because they involve little or no artificial intelligence processing: some sites offer access to legal information, “cloud” solutions, electronic signatures, etc.” see supra note 1, at 17.

[53] Proposed AI Act, art. 69.

[54] Proposed AI Act, art. 52(1).

[55] A. Punzi, supra note 7, at 124.

[56] A. Garapon and M. R. Ferrarese, supra note 2.

[57] A. Punzi, supra note 7, at 125.

[58] In the same line of argumentation see Declaration on Artificial Intelligence, Robotics and Autonomous Systems prepared by the European Group on Ethics in Science and New Technologies (EGE) of 12 March 2018 or the White Paper on Artificial Intelligence – A European approach to excellence and trust published by the European Commission on 19 February 2020.

[59] A. Garapon and M. R. Ferrarese, supra note 2.
