
Articles

Vol. 54 No. 1 (2025): Law and Social Bonds

Risk Assessment on the Basis of the Council of Europe Framework Convention on AI: The Example of Uses in the Field of Law

DOI
https://doi.org/10.36128/PRIW.VI54.1171
Submitted
December 20, 2024
Published
May 9, 2025

Abstract

The development of AI presents both opportunities and threats to human rights, democracy, and the rule of law. In response to these challenges, the Council of Europe drafted the Framework Convention on Artificial Intelligence to establish a general legal framework for the use of AI. The Convention's main principles include the requirement for risk assessment, the introduction of security measures, and liability for harm caused by AI. In the legal sector, AI is used, among other things, to analyze legal documents, automate routine tasks, predict the outcomes of court cases, and build legal aid systems. However, the use of AI in law raises challenges such as the risk of bias, discrimination, invasion of privacy, and threats to the right to a fair trial. The Framework Convention emphasizes the need for compliance with human rights and for protection against the undesirable consequences of using AI in the administration of justice. HUDERIA (Human Rights, Democracy and Rule of Law Impact Assessment) is the Council of Europe's proposed tool for assessing the impact of AI on human rights, democracy, and the rule of law, and it plays a key role in ensuring that AI complies with fundamental principles of justice.

