
Time for non-human judges?

AI could have faults, but so do our human judges, writes K Raveendran.

When two judges or benches considering the same case issue different verdicts while applying the same law, there is something wrong with the law in question. If there is nothing wrong with the law and the circumstances of the case remain the same, then there is something wrong with the judges who delivered the divergent verdicts. But if one judge or bench, issuing a particular verdict, questions the other judge or bench for delivering a verdict different from theirs, it is chaos unlimited. And when the judges in question belong to the 'preferred' category on one side and the 'mutiny' team on the other, all hell breaks loose.
Ordinarily, such conditions might appear far-fetched and most unlikely. But this is exactly what happened in the Supreme Court recently. In a 2014 case involving Pune Municipal Corporation relating to the Land Acquisition Act, a three-judge bench led by then Chief Justice of India R M Lodha, including Justice Kurian Joseph, one of the revolting foursome led by Justice Chelameswar, issued a 2:1 majority verdict that a land acquisition would be deemed to have lapsed if compensation for the acquired land had not been paid to the landowner or deposited with a competent court and retained in the treasury. But in a verdict issued this month, a three-judge bench led by Justice Arun Mishra, one of the so-called 'preferred' benchers, overturned the 2014 judgment, saying it had been pronounced without due regard to the law, and ruled that the Central government had authority over an acquired land even if the compensation was not paid. The latest decision, however, broke the convention that Supreme Court benches of the same numerical strength cannot overrule each other's judgments; in case of any difference, the matter can only be referred to a bench of larger strength.
Anguished by the development, a three-member bench headed by whistleblower judge Justice Madan B Lokur, which included Justice Kurian, criticised Justice Mishra's bench for 'tinkering with judicial discipline'. Justice Kurian went so far as to say in open court that such action would eventually cost the judicial institution. He emphasised that the correctness of a judgment can be doubted, but a bench of similar strength cannot hold a judgment rendered by another bench to be wrong.
The controversy has reignited concerns about judicial decisions being marred by bias, bench preferences and lack of transparency, the very issues raised at the famous, or rather infamous, press conference by the four senior judges. When judges spar in the open on issues other than judicial prudence and propriety, it points to a deep-seated malady that has been plaguing our judiciary for some time. We don't expect judges to behave like the man on the street, but when they do, it jolts the very foundation of people's trust. It is a signal that we have to start looking for alternatives.
Luckily, new contours of the judicial process are emerging that might help institute a more acceptable and transparent system. Artificial intelligence is one such area. Technology and law are converging in a manner that might make it possible to integrate, and even interchange, human and non-human roles for better delivery of justice. Artificial intelligence, better understood here as algorithms, is already in use in judicial decision-making in some US states, where algorithms supplement judges' decisions in bail cases by assessing the risk involved in granting bail.
It has been suggested that if AI can correctly identify patterns in judicial decision-making, it might be better at using precedent to decide or predict cases. According to reports, an AI judge recently developed by computer scientists at University College London drew on extensive data from 584 cases before the European Court of Human Rights to analyse existing case law, and delivered the same verdict as the court in 79 per cent of cases. Apparently, it also found that the European Court judges actually depended more on non-legal facts than on legal arguments. If an AI judge can examine the case record and accurately decide cases based on the facts, human judges would be freed to consider more complex questions. They would also have more time on their hands to do what our Supreme Court judges are currently engaged in. A serious perceived problem with AI judges is that they raise important ethical issues around bias and autonomy. It is argued that AI programs may incorporate the biases of their programmers and of the humans they interact with, and that they can behave in surprising and unexpected ways as they learn to mimic human judges. But the counter-argument to such fears is that human judges are already biased. After all, if we can put up with the imperfections of our judges, we can also accommodate some inadequacies of their non-human counterparts.
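The idea of predicting a verdict from precedent can be illustrated with a toy sketch: treat each past case as a pair of fact text and outcome, and give a new case the outcome of its most textually similar precedent. This is only a minimal illustration under invented assumptions; the case texts and outcomes below are hypothetical, and the actual UCL system used far richer features and a trained statistical model rather than simple word overlap.

```python
# Toy precedent-based prediction: a new case inherits the outcome of the
# most similar past case, measured by Jaccard similarity over word sets.
# All case texts and outcome labels here are invented for illustration.

def words(text):
    """Lowercase a fact summary and split it into a set of words."""
    return set(text.lower().split())

def predict(precedents, new_facts):
    """Return the outcome of the precedent most similar to new_facts."""
    new_words = words(new_facts)

    def similarity(case):
        facts_words = words(case[0])
        # Jaccard similarity: shared words / total distinct words.
        return len(facts_words & new_words) / len(facts_words | new_words)

    facts, outcome = max(precedents, key=similarity)
    return outcome

# Hypothetical precedents: (fact summary, outcome) pairs.
precedents = [
    ("detainee held without charge beyond statutory period", "violation"),
    ("compensation deposited with competent court before possession", "no violation"),
    ("land acquired and compensation paid to the owner on time", "no violation"),
]

print(predict(precedents, "land owner paid compensation and possession taken"))
```

Even this crude nearest-precedent rule shows why the approach leans on facts rather than legal argument: the prediction is driven entirely by how closely the factual record resembles earlier records.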
(The views are strictly personal)
K Raveendran
