Discriminatory algorithmic assessment in criminal proceedings: how COMPAS is eroding the foundations of the right to a human decision in criminal procedure
DOI: https://doi.org/10.22197/rbdpp.v12i1.1287
Keywords: Artificial Intelligence; COMPAS; biased algorithmic risk assessment; algorithmic fairness; criminal procedure; algorithms; right to a judicial decision issued by a judge; predictive parity; robot judges
Abstract
Artificial intelligence charts an entirely new path into previously unmapped territory, and with it comes a host of pressing challenges that should leave no one indifferent. Where cutting-edge technology is concerned, one should not rely on luck. Despite its many advantages, artificial intelligence is not free of troubling inconsistencies and shortcomings that cannot be overcome. Among them is the algorithmic discrimination produced by the use of discriminatory predictive algorithms in criminal proceedings, which may erode the very foundations on which algorithmic fairness rests. It can be said, without great surprise, that artificial intelligence holds the fate of humanity in its hands, given the disruptive potential it carries. What is fundamentally at stake is the emergence of new, and deeply disruptive, forms of intrusion into our daily lives that cannot, at present, be accurately or reliably anticipated. Against this backdrop, pressing questions arise: should judicial actors (i.e., judges, prosecutors and lawyers) fully trust algorithmic risk assessment in criminal proceedings? And should we entrust artificial intelligence with the task of making decisions that can seriously affect the essential core of citizens' fundamental rights? In light of the difficulties identified above, this text concludes that artificial intelligence should not be granted the power to decide matters of paramount importance in criminal proceedings (i.e., the right to a fair trial, an essential pillar of modern criminal procedure), on pain of irreversible harm (e.g., the loss of the criminal justice system's institutional reputation).
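The notion of predictive parity named in the keywords can be made concrete with a minimal sketch. The data and function below are purely illustrative assumptions of this edition, not material from the article or from COMPAS itself: predictive parity asks whether a risk score's positive predictive value (the share of "high risk" flags that are followed by recidivism) is equal across demographic groups, even though equal PPVs can coexist with very different false-positive rates.

```python
# Illustrative sketch of "predictive parity" (all data below are hypothetical).

def ppv(predictions, outcomes):
    """Positive predictive value: share of 'high risk' flags (1) that
    were followed by observed recidivism (1)."""
    flagged_outcomes = [o for p, o in zip(predictions, outcomes) if p == 1]
    return sum(flagged_outcomes) / len(flagged_outcomes) if flagged_outcomes else float("nan")

# Hypothetical risk flags and observed outcomes for two demographic groups.
group_a_flags, group_a_outcomes = [1, 1, 1, 0, 0, 1], [1, 1, 0, 0, 0, 1]
group_b_flags, group_b_outcomes = [1, 1, 0, 0, 1, 1], [1, 0, 0, 1, 1, 1]

ppv_a = ppv(group_a_flags, group_a_outcomes)  # 3 of 4 flags correct
ppv_b = ppv(group_b_flags, group_b_outcomes)  # 3 of 4 flags correct

# Predictive parity holds here (equal PPVs), yet the groups may still
# face different false-positive burdens -- the core of the COMPAS debate.
print(ppv_a, ppv_b)
```

The point of the sketch is only conceptual: satisfying this one fairness metric does not rule out disparate error rates, which is why the choice of metric is itself contested.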
Data Availability Statement
In compliance with open science policies, all data generated or analyzed during this study are included in this published article.
License
Copyright (c) 2026 Hugo Luz dos Santos

This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright in published articles remains with the author, with the journal holding the rights of first publication, in print and/or digital form.
Authors may reuse the same results in other publications only if they clearly indicate this journal as the venue of original publication; absent such indication, the reuse will be treated as self-plagiarism.
Accordingly, reproduction, in whole or in part, of the articles published here requires express mention of their original publication in this journal, citing the volume and issue of that publication as well as the DOI link for cross-referencing. For legal purposes, the original publication source must be stated.
As this is an open-access journal, free use of the articles in educational and scientific settings is permitted provided the source is cited, in accordance with the Creative Commons license.
Since 2022, articles published in the RBDPP have been licensed under a Creative Commons Attribution 4.0 International License. Articles published up to 2021 adopted the Creative Commons Attribution-NonCommercial 4.0 International License.
---------------
Archiving and distribution
The final published PDF may be archived without restriction on any open-access server, indexer, repository, or personal website, such as Academia.edu and ResearchGate.









