
Articles

Vol. 56 No. 3 (2025): Prawo i Więź No. 3 (56) 2025

Legal Assessment of Bias and Discrimination of AI Tools in Higher Education and Research

Submitted
25 March 2024
Published
8 July 2025

Abstract

The use of artificial intelligence (AI) tools in higher education has become increasingly important because of the time and effort they save and the speed with which they transfer information. However, many ethical and legal challenges make their use in this field a complex issue. Problems such as bias and discrimination arising from AI tools require a legal system capable of controlling their use in an optimal manner. Yet the legal regulation of AI tools in higher education, especially in research and data analysis, remains inadequate: although many countries have begun to use these tools in higher education and scientific research, the legal framework has not kept pace. This research explores the legal and ethical challenges of using AI in higher education and scientific research, focusing on the importance of developing a legal framework capable of promoting the use of AI tools in the scientific and educational sectors. The paper highlights the most important relevant laws in technologically advanced countries and measures the extent to which they are reflected in practice.

