JAKARTA - Researchers from MIT have proposed using algorithms in the justice system to make pre-trial decisions fairer, after their research found that human judges tend to be biased. The team analyzed more than one million cases in New York City and found that 20 percent of judges made decisions influenced by the age, race, or criminal record of defendants.

The study, published in the Quarterly Journal of Economics, found that in at least 32 percent of cases, judges' decisions were inconsistent with the defendant's actual ability to pay the specified bail amount and their real risk of failing to appear at trial.

Before a defendant is tried, a judge holds a pre-trial hearing to determine whether the person can be released before their court case begins or whether they are a flight risk and need to be detained. If the judge decides to release someone, they set the amount that person must pay to be released - their bail. How the bail amount is set, and whether someone is released at all, is left to individual judges, and this is where human bias comes in, according to study author Professor Ashesh Rambachan.

The study, which covers 1,460,462 court cases from 2008 to 2013 in New York City, found that 20 percent of judges make biased decisions based on a person's race, age, or criminal record. This results in errors in about 30 percent of all bail decisions: someone may be released and then fail to appear in court, or a judge may detain someone who is not actually a flight risk.

Professor Rambachan therefore argues that using algorithms to replace or augment judges' decision making in pre-trial hearings could make the bail system fairer. However, he noted that this depends on building algorithms that accurately match the desired outcome, and such algorithms do not currently exist.

AI has slowly made its way into courtrooms around the world. At the end of 2023, the British government ruled that judges could use ChatGPT to help write legal decisions. Before that, two algorithms successfully simulated legal negotiations, drafting and finalizing a contract that lawyers deemed legally sound.

However, AI's weaknesses have also been on display. Earlier this year, Google was criticized for historically inaccurate images produced by its Gemini AI. For example, when users asked for an image of Nazis, it produced a picture of a Black person in an SS uniform. Google admitted that the model had not worked as intended.

Other systems, such as OpenAI's ChatGPT, have been shown to break the rules when left unsupervised. When asked to act as a financial trader in a hypothetical scenario, ChatGPT carried out illegal insider trades about 75 percent of the time.

While algorithms can be useful when designed and applied correctly, they are not held to the same standards or laws as humans, according to scholar Christine Moser. Professor Moser, who studies organizational theory at Vrije Universiteit in the Netherlands, wrote in a 2022 paper that allowing AI to make judgment decisions could be a dangerous path. Replacing more human systems with AI, she said, 'could replace human judgment in decision making and thus change morality in a fundamental, possibly irreversible way.'
