Ensuring AI Accountability in Judicial Proceedings: An Actor–Network Theory Perspective




Song Yun-ah

Department of International Trade, Pusan National University, Busan, Republic of Korea

Received: 03 January 2025; Accepted: 02 February 2025; Published: 01 March 2025

Abstract: Artificial Intelligence (AI) is increasingly being incorporated into judicial proceedings, from predictive algorithms for sentencing and risk assessment to AI-powered tools for case management. As AI continues to shape the legal landscape, questions regarding accountability in judicial proceedings become more pressing. This paper adopts an Actor–Network Theory (ANT) framework to explore the roles played by human and non-human actors—such as judges, lawyers, AI systems, and legal institutions—in establishing accountability in AI-driven legal processes. Through this lens, we examine the dynamics between these actors and the implications of AI’s role in legal decision-making. The study identifies key challenges surrounding AI accountability in judicial proceedings, highlighting the need for transparent and responsible AI development, while proposing pathways for integrating AI tools ethically and equitably into judicial processes.

Keywords: AI Accountability, Judicial Proceedings, Actor–Network Theory, Legal Technology, Court Systems, AI Systems, Law and Technology, Ethics in AI, Legal Decision-Making, AI Transparency.

Introduction:

The increasing adoption of Artificial Intelligence (AI) in judicial proceedings has triggered a
significant shift in how legal decisions are made and
how justice is administered. From predictive algorithms
used in sentencing to AI-driven tools for case
management and legal research, AI promises to
enhance the efficiency and accuracy of judicial systems
worldwide. These technologies have the potential to
reduce human biases, accelerate the adjudication
process, and assist in complex decision-making tasks by
analyzing vast amounts of data quickly and effectively.

However, as AI systems become more integrated into
legal processes, they also raise fundamental questions
about accountability. Accountability in judicial
proceedings refers to the responsibility and liability for
actions and decisions made within the legal process.
When AI systems play a central role in shaping
decisions, it becomes unclear who should be held
accountable when an AI-driven decision leads to an
unjust or incorrect outcome. This dilemma is especially important in the context of AI’s involvement in areas such as sentencing, parole decisions, risk assessments,
and predictive policing.

AI accountability concerns are compounded by several factors. First, many AI algorithms used in the legal system, such as those employed for risk assessments and predictive sentencing, are often described as "black boxes"—complex systems where the rationale behind a given decision is not easily understood or
accessible, even to those who use them. This opacity
challenges the principles of transparency and fairness,
two foundational elements of justice. When an AI
system outputs a recommendation or decision, the
individuals affected by that decision, as well as the legal
professionals involved, may be unable to discern why
the system arrived at a particular conclusion. This lack
of transparency creates difficulties in holding AI
systems accountable for their outputs.

Second, AI systems in judicial settings often rely on
historical data to inform their predictions. This raises
the possibility of bias in AI systems, particularly if the
data used to train these systems reflect existing social
or systemic inequalities. For example, in the case of
predictive policing algorithms, the data might be
skewed by historical biases in police practices, leading
AI systems to disproportionately target marginalized
communities. Similarly, sentencing algorithms might
perpetuate racial or socio-economic disparities if they
are trained on biased historical data. Such biases in AI decision-making can undermine the fairness of judicial
processes and potentially exacerbate existing
inequalities.

Given these challenges, it is essential to examine the
concept of AI accountability through a more nuanced
lens. Traditional accountability frameworks in the legal
system focus on human actors—judges, lawyers, and legal institutions—and their responsibility for ensuring

just outcomes. However, AI systems complicate this
framework because they introduce non-human actors
into the decision-making process. As AI systems
become more autonomous and pervasive in legal
settings, the question arises: who is responsible for the
decisions made by AI in judicial contexts?

This paper adopts an Actor–Network Theory (ANT)

framework to explore how accountability operates in
the context of AI-driven judicial processes. ANT offers a
unique perspective by focusing not only on human
actors, such as judges, lawyers, and developers, but
also on non-human actors, such as AI systems and the
technology itself. According to ANT, both human and
non-human actors form networks of relationships that
shape and influence outcomes. In the context of AI in
the judicial system, this means considering how AI
systems interact with human actors and how these
interactions contribute to the final legal decisions. ANT
allows us to view AI as an active participant in the
network of judicial proceedings rather than just a
passive tool, and it highlights the shared responsibility
of human and non-human actors in ensuring
accountability.

The primary aim of this study is to investigate how AI
accountability can be conceptualized in judicial
proceedings through the lens of ANT. By examining the
relationships between actors such as judges, AI
developers, legal institutions, and the AI systems
themselves, this paper will uncover the complexities of
accountability in AI-driven legal systems. In doing so, it
will provide insights into the ethical and practical
challenges of using AI in the justice system, as well as
propose potential pathways for enhancing AI
accountability in judicial contexts.

In addition to addressing questions of accountability,
this paper will explore the broader implications of
integrating AI into judicial decision-making. How do AI
systems affect the role of human judges? What are the
potential risks and benefits of using AI in high-stakes
decisions that can affect individuals’ lives? What

safeguards should be in place to ensure that AI systems
are transparent, fair, and free from bias? These
questions are central to the ongoing debate about the
ethical and legal implications of AI in the justice system,
and they form the foundation for the discussion in this paper.

Ultimately, the goal of this research is to contribute to
the development of responsible, ethical frameworks
for the use of AI in judicial proceedings. As AI
technologies continue to advance, ensuring accountability will be critical to maintaining public trust
in the judicial system and ensuring that technology
serves justice rather than undermining it.

The integration of Artificial Intelligence (AI) into judicial
proceedings is transforming how legal decisions are
made, and it is raising significant concerns about
accountability, fairness, and transparency. AI systems
are now being utilized in various ways within the legal
domain, such as in predictive policing, risk assessments
for bail or parole decisions, sentencing algorithms, and
even the use of AI-powered systems for legal research
and case management. These technologies promise
increased efficiency, reduced bias, and better
outcomes; however, they also introduce a range of
ethical, legal, and practical challenges.

One of the central issues surrounding AI in judicial
contexts is accountability. Who is responsible if an AI
system makes an erroneous or biased decision that impacts an individual’s legal rights or freedom?

Traditional legal frameworks are not always well-suited
to address the complexities introduced by AI
technologies. In particular, questions arise regarding
the responsibility of judges, lawyers, developers, and
institutions when an AI system's output influences legal
decision-making.

This paper employs Actor–Network Theory (ANT) as a

framework to examine the various human and non-
human actors involved in judicial proceedings where AI
is used. By analyzing the networks of interactions
between these actors, we seek to uncover the
dynamics of accountability and how these relationships
influence the legal process. ANT, which emphasizes the
importance of both human and non-human actors in
shaping outcomes, is a useful tool for understanding
the complex interactions between technology and law.

METHODS

This paper adopts a qualitative, theoretical approach,
primarily drawing upon the framework of Actor–Network Theory (ANT) to analyze AI accountability in
judicial proceedings. The research focuses on:

1. Literature Review: The first step involved reviewing existing literature on AI’s role in judicial proceedings, ethical issues surrounding AI, and the application of Actor–Network Theory in legal contexts. Key academic

articles, legal journals, and case studies related to AI in
courts, such as the use of COMPAS for risk assessments,
were analyzed to understand the current discourse on AI accountability in the legal field.

2. Case Study Analysis: Several case studies were
reviewed to explore real-world instances of AI
implementation in judicial proceedings. This included
the use of sentencing algorithms in the United States
and the use of AI-powered case management systems
in European courts. The analysis aimed to identify
patterns of accountability, including who is responsible
when AI systems contribute to legal decisions that affect individuals’ lives.

3. Actor–Network Theory Application: ANT was applied

to map out the various human and non-human actors
involved in the deployment of AI in judicial settings.
These actors include judges, AI developers, legal
practitioners, litigants, and the AI systems themselves.
By examining the relationships and networks among
these actors, the paper aims to understand how
accountability is distributed and how decisions are
made within these networks (a minimal illustrative sketch follows this list).

4. Interviews and Expert Opinions: To further refine the
analysis, interviews were conducted with experts in the
fields of AI ethics, law and technology, and legal theory.
These experts provided insights into the practical
challenges and ethical considerations of integrating AI
into judicial systems, as well as their thoughts on
ensuring accountability in such contexts.
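To make step 3 concrete, the following minimal Python sketch shows one way such a mapping can be represented: actors (human and non-human) as nodes in a directed graph, labeled relations as edges, and a traversal that collects every actor enrolled in producing a given outcome. The actor names, relation labels, and the trace_accountability helper are hypothetical illustrations, not part of any deployed court system.

    # Minimal, illustrative ANT-style actor map (hypothetical names).
    # Actors are nodes; labeled edges record how one actor enrolls,
    # informs, or constrains another.
    from collections import deque

    # (source actor, relation, target actor)
    relations = [
        ("developer",         "builds",      "risk_algorithm"),
        ("historical_data",   "trains",      "risk_algorithm"),
        ("risk_algorithm",    "informs",     "judge"),
        ("lawyer",            "presents_to", "judge"),
        ("legal_institution", "regulates",   "risk_algorithm"),
        ("legal_institution", "oversees",    "judge"),
        ("judge",             "decides",     "sentencing_decision"),
    ]

    def trace_accountability(outcome):
        """Walk backwards from an outcome, collecting every actor,
        human or non-human, enrolled in producing it."""
        incoming = {}
        for src, _rel, dst in relations:
            incoming.setdefault(dst, []).append(src)
        implicated, queue = set(), deque([outcome])
        while queue:
            node = queue.popleft()
            for src in incoming.get(node, []):
                if src not in implicated:
                    implicated.add(src)
                    queue.append(src)
        return implicated

    print(sorted(trace_accountability("sentencing_decision")))
    # Every actor in the network appears: accountability is
    # distributed rather than located in a single node.

The traversal illustrates the ANT claim itself: tracing any judicial outcome backwards implicates the whole network of actors, not a single responsible node.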

RESULTS

The application of Actor–Network Theory to AI accountability in judicial proceedings revealed several
key findings:

1. Human and Non-Human Actors in AI-Driven Legal
Processes: ANT emphasizes the importance of both
human and non-human actors in shaping outcomes. In
the context of judicial proceedings, AI systems are
often viewed as passive tools that simply process data
and provide outputs. However, ANT reveals that AI
systems are active participants in shaping legal
decisions, influencing how cases are handled and how
judges and legal practitioners interpret and act on
information.

Judges: While judges retain ultimate decision-making authority, they are influenced by AI-generated recommendations or risk assessments. For example, predictive algorithms used in sentencing or parole decisions may affect how a judge views a defendant’s likelihood of reoffending or the appropriate sentence.

AI Systems: These systems, such as COMPAS or PACT, play an increasingly active role in providing data-driven predictions. However, their decision-making processes are often opaque, and the algorithms' inherent biases can influence judicial outcomes.

Lawyers and Developers: Lawyers interpret AI-generated data, presenting it to the court, while developers build and update AI systems. The responsibility of these actors in ensuring that AI systems are functioning transparently and fairly is a key element of accountability.

2. Challenges of AI Transparency and Bias: One
significant challenge identified through the ANT
approach is the lack of transparency surrounding AI
decision-making processes. AI algorithms, particularly
in the judicial context, are often "black boxes," where
even those who use them cannot fully understand how
the algorithm arrives at its conclusions. This opacity
makes it difficult to assess whether decisions made by
AI are biased or unfair.

Example: COMPAS Algorithm

In the United States, the COMPAS (Correctional Offender Management Profiling
for Alternative Sanctions) algorithm has been widely
used to assess the risk of recidivism in criminal
defendants. Research has shown that COMPAS may be
biased against African American defendants, but due to
the opacity of the algorithm, it is difficult to pinpoint
exactly why certain outcomes occur. This issue
illustrates the tension between the benefits of using AI
for efficiency and the ethical challenges regarding
fairness and transparency.

3. Distributed Accountability: The ANT framework
reveals that accountability in AI-driven judicial
proceedings is distributed across a network of actors.
This means that no single actor can be entirely
responsible for the actions of an AI system. For
instance:

Judges may be responsible for decisions but often rely on AI systems for assistance, which complicates the determination of liability when AI outputs result in harmful outcomes.

Developers may be responsible for designing and updating the algorithms, but they may not be accountable for how the systems are used in practice.

Legal Institutions may be responsible for setting the policies regarding AI usage in courtrooms, but they may lack the tools to ensure that those policies are consistently followed.

4. Ethical and Legal Implications of AI Accountability:
The lack of clear accountability frameworks raises
ethical and legal concerns. Who is liable when AI
decisions lead to injustice or harm? Is it the developers,
the judges, or the legal institutions? Actor–Network Theory suggests that accountability in AI-driven judicial
processes must be understood as a collective
responsibility, with multiple actors contributing to the
outcome.


DISCUSSION

The integration of Artificial Intelligence (AI) in judicial
proceedings presents both remarkable opportunities
and significant challenges. While AI systems have the
potential to improve judicial decision-making by
providing faster, more data-driven insights, they also
raise critical questions regarding accountability and
responsibility. This discussion explores the complexities
surrounding AI accountability in judicial settings, using Actor–Network Theory (ANT) as the framework to analyze the roles of human and non-human actors in
shaping the use and impact of AI systems in the justice
system. The key issues that emerge include
transparency, bias, distributed accountability, and the
ethical implications of AI's involvement in legal
decision-making.

1. Transparency and the Black Box Problem

One of the most significant concerns surrounding AI
systems in judicial proceedings is their lack of
transparency. AI algorithms, particularly machine
learning models, are often described as "black boxes"
because their decision-making processes are not
always fully explainable or accessible to the people who
interact with them. This issue is particularly acute in the
context of judicial decision-making, where the stakes
are incredibly high for individuals whose lives can be
significantly impacted by a ruling.

For instance, consider the case of COMPAS, a predictive
algorithm used in the United States for risk
assessments in criminal justice. COMPAS is designed to
evaluate the likelihood of a defendant reoffending,
helping judges make decisions about bail, sentencing,
and parole. However, studies have shown that the
algorithm is prone to racial bias, disproportionately
flagging African American defendants as high-risk, even
when controlling for factors such as criminal history.
The lack of transparency in how COMPAS arrives at its
predictions makes it difficult for judges, lawyers, or
even the public to understand why a particular decision
was made. As a result, there is a growing concern about
the accountability of AI systems when their outputs
lead to unjust outcomes.

Actor–Network Theory highlights that both human actors (judges, lawyers, legal institutions) and non-
human actors (AI algorithms, data systems) contribute
to shaping legal outcomes. The opacity of AI algorithms
complicates this relationship because it undermines
the ability of humans to question, challenge, or verify
AI outputs. This makes it harder to hold either the AI
system or the human actors accountable when things
go wrong.

2. Bias in AI Systems: Historical Inequalities
Reproduced?

AI systems are often trained using large datasets that
reflect historical patterns in data, including biases
present in the society at large. These biases can emerge
from various sources: racial bias in policing data,
gender bias in hiring practices, or socio-economic bias
in healthcare outcomes. When AI algorithms are
trained on these biased datasets, they can perpetuate
and even amplify these biases in decision-making.

For example, the risk assessment tools used in the
judicial system might rely on data that includes
historical arrest records, prior convictions, or even
arrest patterns that disproportionately affect minority
communities. As a result, AI systems could reinforce
existing biases in the judicial process, leading to
discriminatory outcomes. The ProPublica investigation
into COMPAS found that the algorithm was more likely
to falsely classify African American defendants as high
risk while misclassifying white defendants as low risk.
This issue highlights the role of historical data in
shaping AI outcomes and raises questions about the
fairness of using AI to inform judicial decisions that
affect vulnerable populations.
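The disparity reported in the ProPublica investigation can be expressed as a simple audit computation. The sketch below is a minimal illustration with fabricated records, not COMPAS data or its actual methodology; it computes the false positive rate per group, i.e., how often defendants who did not reoffend were nonetheless flagged as high risk.

    # Illustrative fairness audit: false positive rate (FPR) by group.
    # Each record is (group, predicted_high_risk, actually_reoffended);
    # the records here are fabricated purely to show the computation.
    from collections import defaultdict

    records = [
        ("group_a", True,  False), ("group_a", True,  False),
        ("group_a", False, False), ("group_a", True,  True),
        ("group_b", False, False), ("group_b", False, False),
        ("group_b", True,  False), ("group_b", True,  True),
    ]

    def false_positive_rates(rows):
        """FPR per group: share of non-reoffenders flagged high risk."""
        flagged = defaultdict(int)         # high risk but did not reoffend
        non_reoffenders = defaultdict(int) # all who did not reoffend
        for group, predicted_high, reoffended in rows:
            if not reoffended:
                non_reoffenders[group] += 1
                if predicted_high:
                    flagged[group] += 1
        return {g: flagged[g] / non_reoffenders[g] for g in non_reoffenders}

    print(false_positive_rates(records))
    # {'group_a': 0.666..., 'group_b': 0.333...}: a gap of this kind,
    # across racial groups, is what ProPublica reported for COMPAS.

An audit of this form makes the fairness question empirical: equal false positive rates across groups is one concrete, checkable criterion, even when the model itself remains a black box.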

In this context, the Actor–Network Theory framework allows us to understand AI not merely as a neutral tool
but as an active participant in a broader network of
actors. When AI systems amplify biases, they interact
with human actors—judges, lawyers, and defendants—creating a network where the biases of AI become part
of the judicial decision-making process. Here,
distributed accountability becomes crucial: While AI
systems can amplify biases, human actors must also
take responsibility for how these systems are
integrated into judicial processes.

3. Distributed Accountability: Who Is Responsible?

AI accountability in judicial proceedings is particularly
complicated because accountability is distributed
across a network of actors—both human and non-human. Actor–Network Theory asserts that responsibility does not belong to one single actor but
rather emerges from the interaction of multiple actors
within a network.

In the case of AI in courts, accountability must be shared among the following actors:

AI Developers: Developers are responsible for creating and maintaining AI systems. However, their
accountability is limited in that they cannot predict all
the potential ways in which their system will be used or
how it might be biased based on the data it receives.
Developers should ensure that the algorithms they
create are ethical, transparent, and tested for fairness,
but they cannot account for every potential misuse of
their technology.



Judges and Lawyers: Judges are tasked with interpreting and applying AI-generated insights.
However, their role becomes more complex when AI is
involved in decision-making. If a judge heavily relies on
an AI recommendation and that recommendation is
flawed, who is responsible for the unjust outcome?
While judges retain ultimate authority in making
decisions, their increasing reliance on AI systems
means that they, too, must be responsible for
understanding how these systems work and ensuring
that they are used in a fair and transparent manner.

Legal Institutions: Legal institutions, such as courts, law schools, and regulatory bodies, play a role
in setting the standards for the use of AI within the
judicial system. However, their accountability is often
limited because they may lack the resources, training,
or expertise to assess the fairness or transparency of AI
tools effectively. The institutions must ensure that
there are regulations and guidelines in place for AI
accountability.

As Actor–Network Theory suggests, accountability is diffused throughout the network, and each actor must
understand their role in the outcome. When AI-driven
decisions lead to harm, it is not clear whether the
responsibility lies with the algorithm developers, the
judges who use the tool, or the legal institutions that
set the policies for AI integration.

4. Ethical Implications of AI in Judicial Decision-
Making

The ethical implications of AI in judicial proceedings are
far-reaching. One of the most pressing concerns is that
the use of AI could undermine the human element in
judicial decision-making. Judges often consider the
nuances of a case—such as a defendant’s background, character, or circumstances—which an AI system might fail to capture. When AI is relied upon to make or
influence decisions, there is a risk of reducing complex
human stories to simplistic data points.

Furthermore, ethical concerns arise when AI systems are used to make decisions that affect people’s rights or freedoms. For example, in bail hearings, where an AI
system might recommend a certain bail amount or
whether an individual should be released, the decision
could be influenced by an algorithmic prediction about the likelihood of reoffending. If the system’s predictions are biased or flawed, this could result in
unjust detention or overly harsh treatment of certain
individuals, especially those from marginalized groups.

From an ethical standpoint, it is essential that AI
systems used in judicial proceedings adhere to the
principles of justice, fairness, and equity. Legal
practitioners and institutions must ensure that these
systems do not just reduce the costs or workload of the courts but also respect human dignity and the right to
a fair trial.

5. Moving Towards Accountability and Transparency

In light of these concerns, transparency and
accountability are essential for AI systems in judicial
proceedings to function ethically. To achieve this,
several steps can be taken:

Explainable AI: AI systems must be designed with explainability in mind, allowing judges, lawyers, and the public to understand how the system arrived at its recommendations. This could involve developing algorithms that are not only effective but also interpretable to humans (see the sketch after this list).

Bias Mitigation: Developers must actively work to reduce bias in AI systems by using diverse, representative data and regularly testing the algorithms for fairness.

Regulatory Oversight: Legal institutions should implement oversight mechanisms to ensure that AI systems are used appropriately within the judicial system. This could involve establishing ethical guidelines for AI use, providing training for judges on how to understand and apply AI insights, and conducting regular audits of AI systems for transparency and fairness.
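As one illustration of what "interpretable to humans" can mean in practice, the sketch below uses a simple additive score whose per-factor contributions can be printed alongside the result, so that a judge or lawyer can see exactly what drove a recommendation. The weights and factor names are invented for demonstration and carry no empirical validity; this is a hedged sketch of the idea, not any court-deployed model.

    # Illustrative explainable scoring: an additive model whose output
    # decomposes factor by factor. Weights and factors are hypothetical.
    WEIGHTS = {
        "prior_convictions": 0.6,
        "age_under_25":      0.3,
        "employment_gap":    0.2,
    }

    def explained_score(features):
        """Return a risk score plus a human-readable contribution trail."""
        total, trail = 0.0, []
        for name, weight in WEIGHTS.items():
            contribution = weight * features.get(name, 0)
            total += contribution
            trail.append(f"{name}: {contribution:+.2f}")
        return total, trail

    score, trail = explained_score(
        {"prior_convictions": 2, "age_under_25": 1, "employment_gap": 0}
    )
    print(f"score = {score:.2f}")  # score = 1.50
    for line in trail:
        print("  " + line)         # prior_convictions: +1.20
                                   # age_under_25: +0.30
                                   # employment_gap: +0.00

Unlike a black-box prediction, every number in the trail can be questioned and challenged in court, which is precisely the kind of contestability the transparency principle demands.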

AI accountability in judicial proceedings is a complex
issue that requires careful consideration of both human
and non-human actors. Actor–Network Theory provides a useful framework for understanding how
accountability is distributed among judges, AI systems,
developers, and legal institutions. As AI continues to
play a larger role in legal decision-making, it is crucial to
address the challenges of transparency, bias, and
distributed responsibility. By ensuring that AI systems
are transparent, fair, and ethical, we can build a more
accountable judicial system where technology serves
justice, rather than undermining it.

AI systems in judicial proceedings raise complex issues
of accountability that traditional legal frameworks are
ill-equipped to address. As AI systems become more
integrated into decision-making processes, particularly
in areas like sentencing, parole, and risk assessments, it
is crucial to establish clearer lines of responsibility. The
Actor–Network Theory approach to understanding AI accountability highlights that responsibility is not solely
located in any single entity but is distributed among a
network of actors.

Transparency and bias remain two of the most
significant challenges in ensuring that AI systems are
used ethically within the judiciary. The black box nature
of many AI algorithms creates difficulties in holding the
technology accountable when its outputs affect legal decisions. Additionally, the possibility of algorithmic bias—where AI systems disproportionately impact certain groups, particularly marginalized communities—adds to concerns about fairness in AI-driven judicial decisions.

To address these challenges, there must be a concerted
effort to:

Improve transparency in AI systems, ensuring that their decision-making processes are explainable and understandable to judges, lawyers, and the public.

Hold developers and legal institutions accountable for ensuring that AI systems are free from bias and are tested for fairness before deployment.

Implement clearer regulatory frameworks that define the responsibilities of judges, developers, and legal institutions when AI systems influence judicial outcomes.

CONCLUSION

AI technologies are reshaping judicial proceedings,
offering both opportunities and challenges in ensuring
fairness, transparency, and accountability in legal
decision-making. Actor–Network Theory provides a valuable framework for understanding how various
human and non-human actors interact and shape the
accountability landscape in AI-driven legal contexts. As
AI continues to play a more prominent role in the
judicial system, it is imperative that accountability
mechanisms be put in place to ensure that these
technologies are used ethically and in the service of
justice.

Further research should focus on developing practical
models for accountability in AI systems used in courts,
with an emphasis on transparency, fairness, and the
distribution of responsibility among actors. Only by
establishing clear accountability frameworks can we
ensure that AI technologies in judicial proceedings
contribute to a more equitable and just legal system.

REFERENCES


Agudo, Ujué, Karlos G. Liberal, Miren Arrese, and Helena Matute. 2024. The impact of AI errors in a human-in-the-loop process. Cognitive Research: Principles and Implications 9: 1–16.

Andrés-Pueyo, Antonio, Karin Arbach-Lucioni, and Santiago Redondo. 2018. The RisCanvi. In Handbook of Recidivism Risk/Needs Assessment Tools. Oxford: John Wiley & Sons, pp. 255–68.

Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias. There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks. ProPublica. Available online: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (accessed on 14 November 2024).

Ashley, Kevin D. 2017. Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age. Cambridge: Cambridge University Press.

Association for Computing Machinery and US Public Policy Council. 2017. Statement on Algorithmic Transparency and Accountability. Washington, DC: Association for Computing Machinery and US Public Policy Council.

Atchison, Amy B., Lawrence Tobe Liebert, and Debuse K. Russell. 1999. Judicial Independence and Judicial Accountability: A selected bibliography. Southern California Law Review 72: 723–810.

Bellio, Naiara. 2021. In Catalonia, the RisCanvi Algorithm Helps Decide Whether Inmates Are Paroled. algorithmwatch.org. Available online: https://algorithmwatch.org/en/riscanvi/ (accessed on 14 November 2024).

Bijker, Wiebe E., and John Law, eds. 1992. Shaping Technology/Building Society: Studies in Sociotechnical Change. Cambridge, MA: The MIT Press.

CCJE. 2023. Compilation of Responses to the Questionnaire for the Preparation of the CCJE Opinion No. 26 (2023) “Moving Forward: Use of Modern Technologies in the Judiciary”. Strasbourg: Council of Europe.

Chiao, Vincent. 2019. Fairness, accountability and transparency: Notes on algorithmic decision-making in criminal justice. International Journal of Law in Context 14: 126–39.

Contini, Francesco. 2020. Artificial Intelligence and the Transformation of Humans, Law and Technology Interactions in Judicial Proceedings. Law, Technology and Humans 2: 4.

Contini, Francesco, and Giovan Francesco Lanzara. 2014. The Circulation of Agency in E-Justice: Interoperability and Infrastructures for European Transborder Judicial Proceedings. Berlin/Heidelberg: Springer.

Czarniawska, Barbara. 2004. On time, space, and action nets. Organization Studies 11: 773–91.

Czarniawska, Barbara, and Bernward Joerges. 1998. The Question of Technology, or How Organizations Inscribe the World. Organization Studies 19: 363–85.

DeBrusk, Chris. 2018. The Risk of Machine-Learning Bias (and How to Prevent It). MIT Sloan Management Review. Available online: https://sloanreview.mit.edu/article/the-risk-of-machine-learning-bias-and-how-to-prevent-it/ (accessed on 14 November 2024).

Dhungel, Anna-Katharina, and Eva Beute. 2024. AI Systems in the Judiciary: Amicus Curiae? Interviews with Judges on Acceptance and Potential Use of Intelligent Algorithms. Paper presented at ECIS 2024, Paphos, Cyprus, June 13–19.

Diakopoulos, Nicholas. 2016. Accountability in Algorithmic Decision Making. Communications of the ACM 59: 56–62.

Dieterich, William, William L. Oliver, and Tim Brennan. 2014. COMPAS Core Norms for Community Corrections. Northpointe. 97. Available online: https://archive.epic.org/algorithmic-transparency/crim-justice/EPIC-16-06-23-WI-FOIA-201600805-WIDOC_DCC_norm_report021114.pdf (accessed on 14 November 2024).

Digital Future Society. 2023. Algorithms in the Public Sector: Four Case Studies of ADMS in Spain. Barcelona: Digital Future Society.

ENCJ. 2018. Independence, Accountability and Quality of the Judiciary. Adopted General Assembly Lisbon, 1 June 2018. European Network of Councils for the Judiciary. Bruxelles: ENCJ.

Equivant. 2017. Northpointe Specialty Courts Manage Your Treatment Docket. Northpointe. Available online: http://www.equivant.com/wp-content/uploads/Northpointe_Specialty_Courts.pdf (accessed on 14 November 2024).

Equivant. 2019. Practitioner’s Guide to COMPAS Core. Northpointe. Available online: https://archive.epic.org/algorithmic-transparency/crim-justice/EPIC-16-06-23-WI-FOIA-201600805-COMPASPractionerGuide.pdf (accessed on 14 November 2024).