Prince Tritto, Philippe
Preferred name: Prince Tritto, Philippe
Official Name: Prince Tritto, Philippe
Alternative Name: pprince
Main Affiliation:
ORCID: 0000-0002-8934-5915
Scopus Author ID: 58704295400
4 results
Publication: Causal Artificial Intelligence in Legal Language Processing: A Systematic Review (MDPI, 2025)
Recent advances in legal language processing have highlighted limitations in correlation-based artificial intelligence approaches, prompting exploration of Causal Artificial Intelligence (AI) techniques for improved legal reasoning. This systematic review examines the challenges, limitations, and potential impact of Causal AI in legal language processing compared to traditional correlation-based methods. Following the Joanna Briggs Institute methodology, we analyzed 47 papers from 2017 to 2024 across academic databases, private sector publications, and policy documents, evaluating their contributions through a rigorous scoring framework assessing Causal AI implementation, legal relevance, interpretation capabilities, and methodological quality. Our findings reveal that while Causal AI frameworks demonstrate superior capability in capturing legal reasoning compared to correlation-based methods, significant challenges remain in handling legal uncertainty, computational scalability, and potential algorithmic bias. The scarcity of comprehensive real-world implementations and overemphasis on transformer architectures without causal reasoning capabilities represent critical gaps in current research. Future development requires balanced integration of AI innovation with law’s narrative functions, particularly focusing on scalable architectures for maintaining causal coherence while preserving interpretability in legal analysis. © The authors, Entropy, MDPI.
Publication: Exploring the Challenges and Limitations of Unsupervised Machine Learning Approaches in Legal Concepts Discovery (Springer, 2024-01-01)
The utilization of machine learning methods for the analysis and interpretation of legal documents has been growing over the years, yet their potential and limitations remain under-explored. This study aims to address this gap, using unsupervised machine learning techniques to discover legal concepts from a corpus of Spanish legal documents. In addition to striving for optimal results, our research also embarks on an exploration of the challenges and limitations of unsupervised machine learning, investigating its capabilities and limitations in legal text analysis. We demonstrate that even relatively simplistic methodologies can yield noteworthy insights, with the highest identification rate of 70% achieved by Topic Modeling with Latent Dirichlet Allocation (LDA). However, challenges were encountered with the identification of some concepts, suggesting potential improvements in the corpus preprocessing and tokenization stages or the techniques to be used. The findings underscore the potential of unsupervised learning algorithms in legal text analysis, offering an intriguing path for future research. While acknowledging the need for higher accuracy in practical applications, this study emphasizes the remarkable feat achieved and proposes a way forward for a hybrid or adaptable approach.
Publication: Assessing AI-Generated Legal Reasoning: A Benchmark for Legal Text Quality from Literature Review (Springer Nature Switzerland, 2025)
The adoption of Large Language Models in law has sparked debate over how best to evaluate AI-generated legal reasoning. Existing benchmarks focus on surface-level accuracy, overlooking deeper dimensions such as argumentative coherence, practical usability, and alignment with jurisprudential values. This paper provides a comprehensive framework that integrates insights from formalism, interpretivism, realism, and argumentation theory to assess legal AI outputs. We first explore the philosophical foundations of legal reasoning, drawing on MacCormick’s concepts of internal and external justification and Perelman’s notions of audience-centered persuasion to highlight the rhetorical and moral dimensions essential for evaluation. Next, we examine structured approaches to evaluation from related fields before showing why existing benchmarks (e.g., LexGLUE, LegalBench, LegalAgentBench) only partially capture the subtleties of legal reasoning. We also contrast common law and civil law traditions to illustrate how a one-size-fits-all approach neglects the distinct roles of precedent versus codified statutes. Building on these theoretical and comparative insights, we propose a three-stage evaluation methodology that begins with automated screening for factual consistency, proceeds to expert-led rubric assessment across five dimensions (Accuracy, Reasoning, Clarity, Usefulness, and Safety), and concludes with iterative refinement through reliability checks. This structured approach, validated through a pilot study, aims to strike a balance between scalability and nuance, equipping researchers and practitioners with a robust tool for assessing AI-generated legal texts. Unifying theoretical rigor, domain-specific practicality, and cross-jurisdictional adaptability, this framework lays a solid foundation for legal AI benchmarks and paves the way for safer, more transparent deployment of AI in law. © The authors, Springer.
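The second stage of the methodology described above is an expert-led rubric assessment across five named dimensions. A hypothetical sketch of how such rubric scores might be aggregated across reviewers (the 1–5 scale, the sample scores, and the simple per-dimension averaging are assumptions for illustration, not the paper's specification):

```python
# Hypothetical aggregation of expert rubric scores per dimension.
from statistics import mean

# The five dimensions named in the abstract.
DIMENSIONS = ["Accuracy", "Reasoning", "Clarity", "Usefulness", "Safety"]

def aggregate_rubric(reviews: list[dict[str, int]]) -> dict[str, float]:
    """Average each dimension's scores across all reviewers."""
    return {d: mean(r[d] for r in reviews) for d in DIMENSIONS}

# Two illustrative reviewer score sheets on an assumed 1-5 scale.
reviews = [
    {"Accuracy": 4, "Reasoning": 3, "Clarity": 5, "Usefulness": 4, "Safety": 5},
    {"Accuracy": 5, "Reasoning": 4, "Clarity": 4, "Usefulness": 3, "Safety": 5},
]
scores = aggregate_rubric(reviews)
```

In the paper's actual pipeline this stage is preceded by automated factual-consistency screening and followed by reliability checks; the sketch covers only the aggregation step.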
Publication: Jurists of the Gaps: Large Language Models and the Quiet Erosion of Legal Authority (Masaryk University Press, 2025-10-01), with Ilsse Carolina Torres Ortega
Large Language Models (LLMs) are not merely tools to assist legal professionals—they represent a deeper epistemic and normative challenge to the foundations of legal authority. While LLMs allow humans to produce outputs that convincingly simulate legal reasoning, they lack the embodied judgment, ethical intentionality, and contextual awareness that define legitimate legal decision-making. This paper argues that the social legitimacy of the legal profession relies on capacities that are not reproducible through computational systems. We first examine the epistemological limitations of LLMs, drawing on Kantian philosophy and complexity theory to show that their outputs are simulations, not acts of understanding. We then analyze how this technological shift risks reducing legal professionals to jurists of the gaps – filling in only where machines fall short – thereby hollowing out the humanistic mission of law. Against this backdrop, we call for a renewed professional ethic centered on interpretation, creativity, and normative judgment, rather than technical supplementation. The automation of law is not the end of the profession, but it could be the end of its authority – unless its practitioners reclaim what cannot be outsourced.
