Can Artificial Intelligence solve the problems of the Indian justice system?

There is much to learn from the faster adoption of AI by other countries. Trials and real-world usage have flagged both benefits and concerns. But in the long run, human judges and lawyers currently dispensing justice could find themselves redundant

Justice RS Chauhan
March 13, 2023 / 05:18 PM IST


Artificial Intelligence has by now permeated every aspect of human life. It is but natural for AI to enter the legal arena. So let us consider the introduction of AI into the Indian legal fraternity, its impact and future, and lastly examine some of the ethical issues and the standards applicable to AI in the judiciary.

The COVID-19 pandemic prompted the shift to virtual hearings from the Supreme Court down to the taluka courts. The judiciary finally woke up to the benefits of information and communication technologies. Approval for e-filing of court documents, live streaming of proceedings, and live transcription of hearings have been epochal events of the last few years.

India’s Slow Progress

The large pendency of cases (69,000 in the Supreme Court, 60 lakh in the High Courts, and 5 crore in the district courts) makes a strong case for introducing AI. The Supreme Court has already established an Artificial Intelligence Committee.

There is hope that AI could untangle the administrative web, digitalise court records, help judges analyse cases quickly, aid legal research, permit the assigning of routine work to computers, fast-track justice for the people, and make access to justice cheaper and speedier.

A pilot project called the Supreme Court Portal for Assistance in Court Efficiency (SUPACE), an AI-enabled programme that reads the files of criminal appeals, condenses them to the relevant facts and documents, and places these before the judges, is being trialled. This process is bound to save judicial time in deciding criminal cases.

Another AI programme, the Supreme Court Vidhik Anuvaad Software (SUVAS), translates judgments from English into the vernacular languages. Such translations enable litigants to clearly understand the scope and ambit, and the reasons and logic, of the judgments.

Other Jurisdictions

Other countries are on a faster trajectory. To predict the behaviour of offenders, England has introduced the Harm Assessment Risk Tool (HART) and the USA the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). Both programmes profile the offender and help the judge assess the likelihood of the offender repeating the crime. Such information is vital while deciding the bail or parole of an offender. Both programmes are meant to preclude the release of a potentially dangerous offender into society.

In both Europe and the USA, AI is also used to predict judicial opinion. The AI studies a judge's past judgments and his/her personality, and predicts the possible outcome. This is now known as 'predictive justice'. In the United States, a group of academics claims it can predict the outcome of a case in the US Supreme Court with an accuracy of 70.2 percent, and the voting pattern of an individual judge with 71.9 percent accuracy.
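
To see how such 'predictive justice' works in principle, consider the minimal sketch below. It trains a simple statistical classifier on invented features of past cases and checks how often it predicts the outcome of unseen cases correctly. The data, the feature names and the model choice are all illustrative assumptions, not the academics' actual system, and the example assumes the scikit-learn library is available.

```python
# Toy illustration of 'predictive justice': learn from past cases, predict new ones.
# All data here is synthetic; real systems use features drawn from thousands of judgments.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per case: [area of law, lower-court outcome, judge's past reversal rate]
X = rng.random((200, 3))
# Synthetic labels: 1 = appeal allowed, 0 = appeal dismissed
y = (0.7 * X[:, 2] + 0.3 * X[:, 1] + rng.normal(0, 0.1, 200) > 0.5).astype(int)

# Train on the first 150 "past" cases, test on the remaining 50 "new" ones.
model = LogisticRegression().fit(X[:150], y[:150])

# Accuracy on held-out cases; published studies report roughly 70 percent on real data.
print("held-out accuracy:", model.score(X[150:], y[150:]))
```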

From advising potential litigants about the chances of success of their case, or the quantum of damages they are likely to receive from a court, to answering litigants' questions on court procedure, AI is improving access to justice in a number of countries.

China has developed robot lawyers capable of arguing cases before human judges; Estonia is reportedly using robot judges to adjudicate small claims. Human beings are slowly being replaced by robots.

Six Concerns

But there are also concerns. Since AI takes decisions on the basis of the data it is trained on, it is likely to absorb the biases contained in that data. In America, for example, it has been found that while predicting the future behaviour of offenders, such programmes are harsher on African-American offenders than on white offenders. Such biases can tilt the scales of justice against a particular individual or community. Thus, the veracity of the underlying data, and the authenticity of the AI's decision, become questionable.
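
One way such bias is detected, and the approach behind the well-known audits of risk-scoring tools, is to compare error rates across groups: if the tool wrongly flags non-reoffenders as 'high risk' far more often in one community than in another, the data or the model is skewed. The sketch below shows the comparison with invented records; a real audit would use actual risk scores and observed outcomes.

```python
# Sketch of a fairness check: compare false-positive rates across two groups.
# The records below are invented purely for illustration.
records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(group):
    """Share of non-reoffenders in the group who were wrongly flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))
# A markedly higher rate for one group signals that the scales are tilted against it.
```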

Secondly, programmes like chatbots do not necessarily state the truth. They construct sentences by predicting the most probable next word, with no check on whether the result is factually correct. Hence, a chatbot can confidently give false information. This again raises issues about the veracity of the information supplied.
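
The mechanics can be seen in a deliberately simplified form in the sketch below: the programme simply picks the statistically most likely continuation, and truth never enters the calculation. The tiny vocabulary and probabilities are invented; a real chatbot learns such probabilities from vast text corpora.

```python
# Minimal sketch of greedy next-word prediction (a hypothetical toy model).
toy_model = {
    ("the", "court"): {"held": 0.6, "dismissed": 0.3, "danced": 0.1},
    ("court", "held"): {"that": 0.9, "firmly": 0.1},
}

def next_word(prev_two):
    """Return the most probable next word given the previous two words."""
    candidates = toy_model.get(prev_two, {})
    # Greedy choice: the highest probability wins; truth is never consulted.
    return max(candidates, key=candidates.get) if candidates else None

sentence = ["the", "court"]
while True:
    word = next_word(tuple(sentence[-2:]))
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # "the court held that" - plausible-sounding, but never fact-checked
```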

Thirdly, if such misinformation enters the legal process, it may lead to injustice or unfairness creeping into the system. Such a scenario raises the possibility of violation of human rights.

Fourthly, since vast amounts of data are being stored, the question of how securely that data is kept arises.

Fifthly, privacy concerns arise if that data is breached.

Lastly, the frightening scenario of AI robots replacing human agency, such as lawyers and judges, rises like a spectre. Doing justice is not always about hard facts and cold decisions; it also involves human sensibility, empathy, and an understanding of the history, culture and vision of the people. The ultimate question is: are machines capable of behaving like human beings?

And Five Principles

The European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe has enunciated five principles for the use of AI in judicial systems:

* Respect for fundamental rights. Ensure that the design and implementation of AI services and tools are compatible with fundamental rights such as privacy, equality, and the right to a fair trial.

* Equal treatment of litigants should be ensured by the AI programme. The possibility of the AI being prejudiced against certain segments of society should be eliminated.

* Data security should be fool-proof.

* Transparency should be built into the system. Not only should the data used for making a decision be verifiable, but the decision itself must also reveal the data on which it was based. In the Netherlands, case law has made transparency in the decision-making process a sine qua non.

* Human control over the AI should be maintained. AI should supplement human thinking and decision-making; it should not supplant it. The quality of the data fed to the machine, and the outcomes of the decisions it produces, should remain subject to human scrutiny and control. The human decision-maker should be free to disagree with, or deviate from, the decision of a computer. The human mind should not be made subordinate or subservient to the robot.

The ultimate question in the world of AI is whether human beings will become redundant and, thus, a disposable commodity. Although today we believe that we are indispensable, we merely delude ourselves. With robot judges and lawyers, human judges and lawyers would be like the appendix: a relic of the past, but useless.

Justice RS Chauhan (Retd.) was Chief Justice of the Telangana and Uttarakhand High Courts. Views are personal and do not represent the stand of this publication.
