Publication information
Bias in AI (Supported) Decision Making: Old Problems, New Technologies
| Authors | |
|---|---|
| Year of publication | 2025 |
| Type | Article in a peer-reviewed journal |
| Journal / Source | International Journal for Court Administration |
| www | Open access article |
| DOI | https://doi.org/10.36745/ijca.598 |
| Keywords | Bias; artificial intelligence; automated decision-making; human rights; fair trial |
| Description | Recently, various regulations and recommendations for the use of AI-based technology, especially in the judiciary, have become more prevalent. One major concern is addressing bias in such systems. In 2016, ProPublica published a damning report on the use of AI-based technology in making decisions about people’s rights and obligations, revealing that such systems tend to replicate and amplify existing biases. The problem stems from machine learning’s extensive need for training data, which often consists of historical records such as court decisions. For example, the bail system in the US was found to be alarmingly biased against African Americans. Even efforts to avoid mentioning protected characteristics have proven insufficient, as so-called “fairness through unawareness” has been undermined by proxy characteristics. Addressing the bias of AI systems is a crucial issue as automated means become more involved in judicial settings. The following paper examines various biases that might be introduced in AI-based systems, potential solutions and regulations, and compares possible solutions with the current approaches of the European Court of Human Rights (ECtHR) towards biases in human judges. The aim is to confront the question of how to approach current bias in judges compared to approaching bias in future AI-based judicial decision-making technology. |
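
The abstract's point about proxy characteristics can be made concrete with a small sketch. The following Python snippet is not from the paper: it uses synthetic data and hypothetical names (`group` for a protected attribute, `proxy` for a correlated feature such as a neighborhood code) to show how a model trained under "fairness through unawareness" can still reproduce the disparity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., group membership); never given to the model.
group = rng.integers(0, 2, size=n)

# Proxy feature (e.g., a neighborhood code) strongly correlated with group.
proxy = group + rng.normal(0.0, 0.3, size=n)

# Historical labels encode past bias against group == 1.
favorable = (rng.normal(0.0, 1.0, size=n) - 0.8 * group > 0).astype(int)

# "Fairness through unawareness": the model sees only the proxy feature.
model = LogisticRegression().fit(proxy.reshape(-1, 1), favorable)
pred = model.predict(proxy.reshape(-1, 1))

for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"favorable prediction rate, group {g}: {rate:.2f}")
# The rates differ sharply: the proxy reintroduces the protected
# attribute, so omitting it does not remove the bias.
```

Auditing outcomes across groups, rather than merely omitting protected attributes from the input, is what exposes this failure mode.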