Psychological Implications of Artificial Intelligence in Judicial Decision-Making and Criminal Sentencing

Authors

  • Elvira Čekić University of Sarajevo

DOI:

https://doi.org/10.47604/ijp.3659

Keywords:

Artificial Intelligence, Judicial Decision-Making, Criminal Sentencing, Psychological Effects, Perceived Bias, Trust in Justice, Fairness Perception

Abstract

Purpose: This study examines the psychological implications of integrating artificial intelligence (AI) into judicial decision-making in criminal justice, including algorithmically supported risk assessment and sentencing decisions. It analyzes how AI-based decision-support systems influence perceptions of fairness, trust in judicial decisions, and decision confidence, as well as the emotional responses of judges, jurors, defendants, and victims.

Methodology: The study employs a theory-driven and interdisciplinary conceptual framework grounded in psychological theories of decision-making, procedural justice, and affective processes. Through a critical integrative synthesis of legal, psychological, and ethical scholarship on algorithmic decision-making, predictive modeling, and risk assessment systems in criminal justice, the study examines their implications for human judgment, responsibility attribution, and judicial experience.

Findings: The analysis demonstrates that AI-assisted decision-making can substantially shape psychological perceptions of justice and the legitimacy of judicial processes. Although algorithmic tools are often perceived as consistent and objective, their reliance on historical data may reproduce existing biases, thereby negatively affecting perceived fairness, trust in judicial outcomes, and decision confidence among legal professionals and trial participants. These findings indicate that the psychological impact of artificial intelligence extends beyond technical accuracy and plays a significant role in shaping perceptions of the legitimacy of judicial processes.

Unique Contribution to Theory, Practice, and Policy: This study contributes to psychological theory by offering a systematic examination of the cognitive, affective, and evaluative processes associated with algorithmically supported judicial decision-making in criminal justice. In the context of judicial practice, the analysis demonstrates how uncritical reliance on AI systems may diminish judicial autonomy and obscure responsibility attribution in decision-making processes. From a public policy perspective, the findings contribute to the conceptualization of regulatory approaches oriented toward transparency, fairness, and trust in the use of AI in judicial decision-making.


References

Albright, A. (2024). The hidden effects of algorithmic recommendations (Unpublished manuscript). https://apalbright.github.io/pdfs/albright-algo-recs-PAPER.pdf

Alim, M., Due, C., & Strelan, P. (2021). Relationship between experiences of systemic injustice and wellbeing among refugees and asylum seekers: A systematic review. Australian Psychologist, 56(3), 1–15. https://www.researchgate.net/publication/352799043_Relationship_between_experiences_of_systemic_injustice_and_wellbeing_among_refugees_and_asylum_seekers_a_systematic_review

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against Blacks. ProPublica. https://www.antoniocasella.eu/nume/Angwin_2016.pdf

Barry, B. M. (2021). How judges judge: Empirical insights into judicial decision-making. Informa Law from Routledge. https://www.routledge.com/rsc/downloads/Chapters_1_and_2.2_from_9780367086244.pdf

Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and machine learning: Limitations and opportunities. MIT Press. https://fairmlbook.org/pdf/fairmlbook.pdf

Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and Social Psychology Review, 3, 193–209. https://journals.sagepub.com/doi/10.1207/s15327957pspr0303_3

Bennett Moses, L., Legg, M., Silove, J., & Zalnieriute, M. (2023). AI decision-making and the courts: A guide for judges, tribunal members and court administrators (Revised ed.). Australasian Institute of Judicial Administration. https://aija.org.au/wp-content/uploads/2023/12/AIJA_AI-DecisionMakingReport_2023update.pdf

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of Machine Learning Research, 81, 1–11. https://www.researchgate.net/publication/321745548_Fairness_in_Machine_Learning_Lessons_from_Political_Philosophy

Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://journals.sagepub.com/doi/10.1177/2053951715622512

Crawford, K., & Paglen, T. (2021). Excavating AI: The politics of images in machine learning training sets. AI & Society, 36, 1105–1116. https://link.springer.com/article/10.1007/s00146-021-01162-8

Chen, B. M., Li, Z., Cai, D., & Ash, E. (2024). Detecting the influence of the Chinese guiding cases: A text reuse approach. Artificial Intelligence and Law, 32, 463–486. https://www.researchgate.net/publication/370582932_Detecting_the_influence_of_the_Chinese_guiding_cases_a_text_reuse_approach

Contini, F. (2020). Artificial intelligence and the transformation of humans, law and technology interactions in judicial proceedings. Law, Technology and Humans, 2(1), 4–18. https://www.researchgate.net/publication/341283034_Artificial_Intelligence_and_the_Transformation_of_Humans_Law_and_Technology_Interactions_in_Judicial_Proceedings

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://www.researchgate.net/publication/268449803_Algorithm_Aversion_People_Erroneously_Avoid_Algorithms_After_Seeing_Them_Err

Deeks, A. S. (2019). The judicial demand for explainable artificial intelligence. Columbia Law Review, 119, 1829. https://www.columbialawreview.org/wp-content/uploads/2019/11/Deeks-Judical_Demand_for_Explainable_AI.pdf

Edmond, G., White, D., Towler, A., San Roque, M., & Kemp, R. (2022). Facial recognition and image comparison evidence: Identification by investigators, familiars, experts, super-recognisers and algorithms. Melbourne University Law Review, 45(1), 99–160. https://espace.library.uq.edu.au/view/UQ:13cb90f

Enarsson, T., Enqvist, L., & Naarttijärvi, M. (2022). Approaching the human in the loop: Legal perspectives on hybrid human/algorithmic decision-making in three contexts. Information & Communications Technology Law, 31(1), 123–153. https://www.diva-portal.org/smash/get/diva2:1582038/FULLTEXT02.pdf

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: Picador, St Martin’s Press.

European Commission for the Efficiency of Justice (CEPEJ). (2018). European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe. https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c

European Commission. (2021). Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

Fine, A., Berthelot, E. R., & Marsh, S. (2025). Public perceptions of judges’ use of AI tools in courtroom decision-making: An examination of legitimacy, fairness, trust, and procedural justice. Behavioral Sciences, 15(4), Article 476. https://www.mdpi.com/2076-328X/15/4/476

Friedman, B., & Kahn, P. H., Jr. (2003). Human values, ethics, and design. In J. A. Jacko & A. Sears (Eds.), The human–computer interaction handbook (pp. 1177–1201). Lawrence Erlbaum Associates. https://brandorn.com/img/writing/tech-ethics/human-values-ethics-and-design.pdf

Gawronski, B., Conway, P., Hütter, M., Luke, D. M., Armstrong, J., & Friesdorf, R. (2020). On the validity of the CNI model of moral decision-making: Reply to Baron and Goodwin (2020). Judgment and Decision Making. Cambridge University Press. https://www.cambridge.org/core/journals/judgment-and-decision-making/article/on-the-validity-of-the-cni-model-of-moral-decisionmaking-reply-to-baron-and-goodwin-2020/07D1BA6BEEE80D359DCADB1BD4DBE53D

Gupta-Kagan, J. (2018). The intersection between young adult sentencing and mass incarceration. Wisconsin Law Review, 2018, 669–734. https://wlr.law.wisc.edu/wp-content/uploads/sites/1263/2020/02/Gupta-Kagan-Final.pdf

Hedler, L. (2024). Risk and danger in the introduction of algorithms to courts: A comparative framework between EU and Brazil. Oñati Socio-Legal Series, 14(5), 1315–1336. https://opo.iisj.net/index.php/osls/article/view/1859/2314

Jadidi, V. (2025). The impact of artificial intelligence on judicial decision-making processes. AJMHSS, 1(4). https://www.researchgate.net/publication/393185391_The_Impact_of_Artificial_Intelligence_on_Judicial_Decision-Making_Processes

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-292. https://www.jstor.org/stable/1914185

Khryapchenkova, O. (2025). The impact of emotional experience on AI product perception. International Journal of Computer Trends and Technology, 73(3), 32–41. https://www.ijcttjournal.org/2025/Volume-73%20Issue-3/IJCTT-V73I3P104.pdf

Keddell, E. (2019). Algorithmic justice in child protection: Statistical fairness, social justice and the implications for practice. Social Sciences, 8(10), 1–22. https://ideas.repec.org/a/gam/jscscx/v8y2019i10p281-d274114.html

Korteling, J. E. (H.), van de Boer-Visschedijk, G. C., Blankendaal, R., Eikelboom, A. R., & van der Meer, S. (2021). Human- versus artificial intelligence. Frontiers in Artificial Intelligence, 4, Article 622364. https://www.researchgate.net/publication/350375878_Human-_versus_Artificial_Intelligence

Kim, N. J., & Kim, M. K. (2022). Teacher’s perceptions of using an artificial intelligence–based educational tool for scientific writing. Frontiers in Education, 7, Article 755914. https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2022.755914/full

Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633-705. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2765268

Kunst, M., & Popelier, L. (2015). Victim satisfaction with the criminal justice system and emotional recovery: A systematic and critical review. Trauma, Violence, & Abuse, 16(3), 1-23. https://www.researchgate.net/publication/267741931_Victim_Satisfaction_With_the_Criminal_Justice_System_and_Emotional_Recovery

Lipton, Z. C. (2016). The mythos of model interpretability. Communications of the ACM, 61(10), 96–100. https://www.researchgate.net/publication/303942775_The_Mythos_of_Model_Interpretability

Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14–19. https://doi.org/10.1111/j.1740-9713.2016.00960.x

Martinho, A. (2025). Surveying judges about artificial intelligence: Profession, judicial adjudication, and legal principles. AI & Society, 40, 569–584. https://www.researchgate.net/publication/378436460_Surveying_Judges_about_artificial_intelligence_profession_judicial_adjudication_and_legal_principles

Minson, J. A., Mueller, J. S., & Larrick, R. P. (2017). The contingent wisdom of dyads: When discussion enhances vs. undermines the accuracy of collaborative judgments. Management Science, 64(9), 4177–4192. https://doi.org/10.1287/mnsc.2017.2823

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://www.researchgate.net/publication/280685490_Confirmation_Bias_A_Ubiquitous_Phenomenon_in_Many_Guises

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://www.science.org/doi/10.1126/science.aax2342

Osoba, O. A., & Welser, W. (2017). The risks of artificial intelligence to security and the future of work. RAND Corporation. https://www.rand.org/pubs/perspectives/PE237.html

O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishers.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Peluso Lopes, G. (2025). Bias in adjudication and the promise of AI: Challenges to procedural fairness. Law, Technology and Humans, 7(1), 47. https://repository.tilburguniversity.edu/server/api/core/bitstreams/189d57e1-574e-4d69-9890-392d27efa57e/content

Priyansh, & Saggu, A. K. (2025). Building trust in AI systems: A study on user perception and transparent interactions. International Journal on Science and Technology, 16(1). https://www.ijsat.org/papers/2025/1/2118.pdf

Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT*), 33–44. https://www.nist.gov/system/files/documents/2021/08/23/ai-rmf-rfi-0038.pdf

Rawls, J. (1999). A theory of justice (Rev. ed.). Belknap Press of Harvard University Press. https://www.cita.or.id/wp-content/uploads/2016/06/John-Rawls-A-Theory-of-Justice-Belknap-Press-1999.pdf

Roth, A. (2017). Machine testimony. Yale Law Journal, 126, 1972. https://yalelawjournal.org/pdf/RothFinal_c4o97on1.pdf

Ristovska, S. (2023). Ways of seeing: The power and limitation of video evidence across law and policy. First Monday, 28(7). https://firstmonday.org/ojs/index.php/fm/article/view/13226/11048

Sullivan, M. J. L., Scott, W., & Trost, Z. (2012). Perceived injustice: A risk factor for problematic pain outcomes. The Clinical Journal of Pain, 28(6), 484–488. https://sullivan-painresearch.mcgill.ca/publications/2012/CJP_28_484-488.pdf

Sarkki, P., Tomas, F., & Antheunis, M. (2025). Artificial intelligence in the criminal justice system: Systematic literature review and a research agenda (Unpublished manuscript). https://www.researchgate.net/publication/396032797_Artificial_Intelligence_in_the_Criminal_Justice_System_Systematic_Literature_Review_and_a_Research_Agenda

Simon, H. A. (1955). A Behavioral Model of Rational Choice. The Quarterly Journal of Economics, 69(1), 99–118. https://www.jstor.org/stable/1884852

Tyler, T. R. (2006). Why people obey the law. Princeton University Press.

Trost, Z., Agtarap, S., Scott, W., Driver, S., Guck, A., Roden-Foreman, K., Reynolds, M., Foreman, M. L., & Warren, A. M. (2015). Perceived injustice after traumatic injury: Associations with pain, psychological distress, and quality of life outcomes 12 months after injury. Rehabilitation Psychology, 60(3), 213–221. https://www.researchgate.net/publication/280219600_Perceived_Injustice_After_Traumatic_Injury_Associations_With_Pain_Psychological_Distress_and_Quality_of_Life_Outcomes_12_Months_After_Injury

Surden, H. (2019). Artificial intelligence and law: An overview. Georgia State University Law Review, 35, 1305. https://scholar.law.colorado.edu/cgi/viewcontent.cgi?article=2340&context=faculty-articles

Susskind, R. E. (2019). Online courts and the future of justice. Oxford University Press.

UNESCO. (2021). Recommendation on the ethics of artificial intelligence. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

Zahra, Y., & Amirah. (2024). Ethical and legal implications of AI in decision-making processes. Jurnal Sistem Informasi dan Teknik Informatika, 2(2), 49–54. https://www.researchgate.net/publication/389082488_Ethical_and_Legal_Implications_of_AI_in_Decision-Making

Zarsky, T. (2013). Transparent predictions. University of Illinois Law Review, 2013(4). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2324240

Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist: It’s time to make it fair. Nature, 559(7714), 324–326. https://doi.org/10.1038/d41586-018-05707-8


Published

2026-02-24

How to Cite

Čekić, E. (2026). Psychological Implications of Artificial Intelligence in Judicial Decision-Making and Criminal Sentencing. International Journal of Psychology, 11(1), 36–63. https://doi.org/10.47604/ijp.3659

