Ethical Principles and Regulatory Frameworks Shaping the Global Landscape of Artificial Intelligence (AI): An Analysis Situated Within Leading International Efforts to Study and Govern AI Ethics
Abstract
The swift evolution of artificial intelligence (AI) has become a central force driving digital transformation across numerous sectors, including finance, healthcare, education, and security. This progress, however, also raises profound ethical and regulatory challenges. This study analyzes global approaches to AI ethics and governance through a comprehensive review of leading scholarly works. Floridi et al. (2018), through the AI4People initiative, proposed an ethical framework built on five core principles: beneficence (promoting good), non-maleficence (preventing harm), autonomy, justice, and explicability. These principles are designed to ensure that AI development serves the public good while upholding fundamental human values. Similarly, Mittelstadt et al. (2016) mapped the ethical concerns raised by algorithmic systems, highlighting bias, discrimination, and opacity in automated decision-making; their analysis suggests that, without effective governance mechanisms, AI systems risk reinforcing existing social inequalities. Cath (2018), meanwhile, approached the issue from a legal and policy standpoint, identifying obstacles to national and global AI regulation, including divergent national policies and the need for legal frameworks that are both flexible and robust. Building on these insights, this paper contends that AI ethics and governance should be viewed as deeply interrelated dimensions: developing responsible AI requires a synthesis of ethical standards, legal frameworks, and technological innovation aimed at promoting collective well-being. International organizations, governments, and academic institutions therefore play a crucial role in establishing consistent global guidelines that keep AI advancement aligned with human-centered values.
References
Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080
Citaristi, I. (2022). United Nations Educational, Scientific and Cultural Organization—UNESCO. In The Europa Directory of International Organizations 2022 (pp. 369–375). Routledge.
Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2021). How to design AI for social good: Seven essential factors. In Ethics, governance, and policies in artificial intelligence (pp. 125–151). Springer International Publishing. https://doi.org/10.1007/978-3-030-69978-9_7
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679. https://doi.org/10.1177/2053951716679679
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
Organisation for Economic Co-operation and Development. (2021). OECD principles on artificial intelligence. https://www.oecd.org/going-digital/ai/principles/
UNESCO. (2021). Recommendation on the ethics of artificial intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000373434
Yeung, K., Lodge, M., & others. (2023). AI governance by human rights frameworks. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3435011