Law reform charity proposes rights-based framework for AI use in justice system
Law reform charity JUSTICE has proposed the first rights-based framework to guide AI use across the UK justice system, in a report urging that the deployment of the technology be guided by a “clear purpose and responsibility”.
In the report ‘AI in our Justice System’ published today (30 January), JUSTICE also stressed that the rule of law and human rights framework should be embedded as the guiding principles of AI use across the sector, including in public authorities.
The past decade has seen the use of AI in the justice system quickly expand to the point where lawyers now use AI for a range of tasks, including administrative assistance, document review, legal research and drafting.
Lawyers, alternative dispute resolution professionals, and repeat litigants like insurance companies can also make use of litigation prediction AI to develop strategies for settling cases.
According to the report, the rise of the technology could help fix the “countless blind spots, persistent inequalities and numerous inefficiencies” plaguing the justice system.
However, it urged that AI use in the sector should involve a rights-based approach drawing on enforceable legal rights.
Its framework proposes two clear requirements for those looking to use AI in the justice system:
- Goal led: Ensure the tool is clearly aimed at improving one or more of the justice system’s core goals of access to justice, fair and lawful decision-making, and transparency.
- Duty to act responsibly: Ensure all those involved in creating and using the tool take responsibility for ensuring the rule of law and human rights are embedded at each stage of its design, development, and deployment.
JUSTICE said the framework is applicable to any area of the justice system – from corporate settings and public authorities to the civil, family and criminal courts, through to administrative back-office functions.
The report added: “The deployment of AI in the justice sector should be the result of careful consideration, and one guided by a clear purpose and responsibility.
“At the heart of this purpose is the embedding of the rule of law and the human rights framework as the guiding principles.
“Our framework, which seeks to embed the rule of law and human rights, sets out a clear pathway to assessing the suitability of any given AI system for use in the justice sector.”
Sophia Adams Bhatti, report co-author and Chair of JUSTICE’s AI programme, said: “Given the desperate need to improve the lives of ordinary people and strengthen public services, AI has the potential to drive hugely positive outcomes.
“Equally, human rights and the rule of law drive prosperity, enhance social cohesion, and strengthen democracy. We have set out a framework which will allow for the positive potential of both to be aligned.”
Stephanie Needleman, Legal Director of JUSTICE, added: “AI isn’t a cure-all, and its use carries big risks – to individuals and to our democracy if people lose trust in the law through more scandals like Horizon.
“But it also offers potential solutions. We must therefore find ways to harness it safely in service of a well-functioning justice system. Our rights-based framework can help us navigate this challenge.”
Adam Carey