
Securing a peaceful, sustainable, and humane future through an international data-based systems agency (IDA) at the UN

Published online by Cambridge University Press:  20 December 2024

Peter G. Kirchschlaeger*
Affiliation:
University of Lucerne, Institute of Social Ethics ISE, Frohburgstrasse 3, 6002 Lucerne, Switzerland; ETH Zurich and ETH AI Center, ETH Zurich, Switzerland

Abstract

The international community, and the UN in particular, urgently needs wise policies and a regulatory institution to put data-based systems, notably AI, to positive use and to guard against their abuse. Digital transformation and “artificial intelligence (AI)”, which can more adequately be called “data-based systems (DS)”, present ethical opportunities and risks. Helping humans and the planet to flourish sustainably in peace, and guaranteeing globally that human dignity is respected not only offline but also online, in the digital sphere, and in the domain of DS, requires two policy measures: (1) human rights-based data-based systems (HRBDS) and (2) an International Data-Based Systems Agency (IDA). The IDA should be established at the UN as a platform for cooperation in the field of digital transformation and DS, fostering human rights, security, and peaceful uses of DS.

Type
Commentary
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Key theme

The urgent need for wise international policies and a UN institution to help put “AI” to positive use for humanity and to provide guardrails against the risks of its abuse.

Policy Significance Statement

This comment is significant for policymakers because it outlines concretely how to share the benefits, and manage the risks, of digital transformation and so-called “artificial intelligence (AI)” through two concrete policy measures: (1) human rights-based data-based systems (HRBDS) and (2) an International Data-Based Systems Agency (IDA), modeled on the International Atomic Energy Agency (IAEA). The IDA would serve as a platform for cooperation in the field of DS, promoting the human rights, security, and nonoppressive uses of DS, and as a global supervisory and approvals agency.

1. Introduction

Digital transformation and so-called “artificial intelligence (AI)” present humanity and the planet with enormous ethical opportunities, as well as with ethical risks. The UN General Assembly recently adopted a resolution aiming for “safe, secure and trustworthy artificial intelligence systems” (United Nations General Assembly, 2024). It is now urgent to implement, and build on, this resolution.

This comment suggests that an International Data-Based Systems Agency (IDA) needs to be established urgently at the UN, both as a global platform for technical cooperation in the field of data-based systems (DS), fostering human rights, safety, security, and peaceful uses of DS, and as a global supervisory and monitoring institution and regulatory authority in the area of DS, responsible for approving market access.

2. Data-based systems (DS) rather than “Artificial Intelligence”

Technologies cannot perform as moral subjects or moral agents. Humans carry the ethical responsibility for machines. Humans must lay down ethical principles and ethical and legal norms; set the framework, goals, and limits of digital transformation; and define the use of machines, in addition to examining, analyzing, evaluating, and assessing technology-based innovation from an ethical perspective.

The term “data-based systems (DS)” (Kirchschlaeger, 2021) would be more appropriate than “artificial intelligence” because it describes what actually constitutes “artificial intelligence”: the generation, collection, and evaluation of data; data-based perception (sensory, linguistic); data-based predictions; and data-based decisions.

Pointing to its core characteristic, namely that it is based on data and relies exclusively on data in all its processes, its own development, and its actions (more precisely, its reactions to data), lifts the veil of the inappropriately attributed myth of “intelligence” that covers substantial ethical problems and challenges of data-based systems. This allows more accuracy, adequacy, and precision in the critical reflection on data-based systems.

3. Ethical opportunities and risks of DS

“Data-based systems (DS)” comprise ethical opportunities and ethical risks. DS can be powerful, for example, not only for fostering human dignity and sustainability but also for violating human dignity or destroying the planet. Elon Musk has warned: “AI is far more dangerous than nukes [nuclear warheads]. So why do we have no regulatory oversight? This is insane” (Clifford, 2018).

Humans need to become active so that digital transformation and DS do not simply happen but are actively shaped by humans. This is necessary so that digital transformation and DS are not reduced to instruments serving pure efficiency gains but can rise to their ethical potential. More importantly, there is a need for normative guidance to review the economic self-interests that have so far driven digital transformation and DS almost exclusively (Zuboff, 2019) and to guide calls for international regulations and governance in the digital domain and in the sphere of DS.

4. Existing global governance initiatives

Humanity and the planet are struggling with the enormous ethical and legal problems that digital transformation and the use of DS pose. Among others, there are constant violations of the human rights to privacy and data protection. Data are stolen from humans and sold to the highest bidder. The continuous disrespect of privacy and data protection represents a massive attack on the freedom of all humans.

Several declarations, recommendations, principles, and guidelines have contributed to a debate about the international governance of DS. The European Parliament and Council reached political agreement on the European Union’s Artificial Intelligence Act (“EU AI Act”) (European Commission, 2024). The EU AI Act aims to provide a comprehensive legal framework for the regulation of AI systems across the EU, ensuring the safety of, and the respect of fundamental rights by, DS, as well as encouraging investment and innovation in the field of DS.

Other legal initiatives are being pursued in China and in the USA at the federal level, and several governments at the state level have released new regulations. These activities have been categorized as the “American Market-Driven Regulatory Model”, the “Chinese State-Driven Regulatory Model”, and the “European Rights-Driven Regulatory Model” (Bradford, 2023).

The rapidly developing technical possibilities for disinformation and the manipulation of people through large language models such as ChatGPT open new horizons in this regard. At the same time, quality journalism as a pillar of democracy will come under even greater economic and political pressure, because media channels can be filled with texts from ChatGPT at low cost. Moreover, economic manipulation affects humans as consumers: DS know exactly, to use a metaphor, which piano keys they must hit to make the music play, in other words, to make humans shop the way they want humans to.

Another set of security risks concerns the mental health of children and young people affected by social media, as well as the physical health and lives of all of us, given the existential consequences of DS-based cyber-attacks and military applications of DS for global peace and security.

To allow humans and the planet to flourish sustainably and to guarantee globally that human dignity is respected not only offline but also online, in the digital sphere, and in the domain of DS, the following concrete measures are proposed.

5. Human rights-based data-based systems (HRBDS)

Human rights as an ethical frame of reference could provide, as a minimum, the necessary normative guidance. Human rights offer the major benefit of being based on a simple concept and focusing on the essentials. Besides the ethical justifiability of human rights and their universality (Kirchschlaeger, 2013a), they define the minimum standards guaranteeing that all humans—always, everywhere—can physically survive and lead a life with dignity—a life worth living. They also encourage and foster innovation by protecting people’s freedom to think, express their opinion, and access information, as well as promote pluralism by respecting each person’s right to self-determination.

Based on these considerations, we should strive for the human rights-based design, development, production, and use of data-based systems, as well as for the nonuse of data-based systems where human rights concerns demand it: we need human rights-based data-based systems (HRBDS) (Kirchschlaeger, 2013b, 2021), including a precautionary approach, the reinforcement of existing human rights instruments specifically for data-based systems, and the promotion of algorithms supporting and furthering the realization of human rights (Quintavalla and Temperman, 2023).

HRBDS means—in other words—that human rights are respected, protected, implemented, and realized within the entire life cycle of DS and the complete value-chain process of DS.

6. International data-based systems agency (IDA)

In order to implement and realize HRBDS as a regulatory framework serving the humane and sustainable future of humanity and the planet, an International Data-Based Systems Agency (IDA), analogous to the International Atomic Energy Agency (IAEA), needs to be established at the UN. It would be a platform for technical cooperation in the field of digital transformation and DS for state and nonstate actors (including, of course, the private sector, civil society, and organizations and institutions active in this field). Integrated in or associated with the UN, it should work for the safe, secure, and peaceful uses of data-based systems, contributing to international peace and security, to the respect and realization of human rights, and to the United Nations’ Sustainable Development Goals, and it should act as a global supervisory and monitoring institution and regulatory authority in the area of DS, responsible for approving market access. Its global and inclusive approach would permit it to master the risk of fragmentation in the field.

IDA could be built following the model of the International Atomic Energy Agency (IAEA) (IAEA, 2011) as an “institution with teeth”: thanks to its legal powers, functions, enforcement mechanisms, and instruments, the IAEA has been able to foster innovation and ethical opportunities while at the same time protecting humanity and the planet from the existential risks in the domain of nuclear technologies (IAEA, 2013), which share the same dual nature as DS, covering both ethical upsides and downsides.

Stronger and stricter commitment to the legal framework is necessary, as is regulation that is precise, goal-oriented, and strictly enforced. The IDA would serve this necessity. In this way, regulation may also be advantageous economically.

Compared to other models for the global governance of DS, such as the model of the Intergovernmental Panel on Climate Change (IPCC) (Carnegie Council for Ethics in International Affairs, 2023) and the model of the International Civil Aviation Organization (ICAO) (Baker McKenzie, 2023), IDA promises to achieve the precision, goal orientation, and strict enforcement necessary not only to guarantee the flourishing of humanity and the planet from an ethical standpoint but also to foster innovation from an economic point of view.

What makes the establishment of an IDA realistic is not only its essential and minimal normative framework, its practice-oriented and participatory governance structure, and its striving for legitimacy combined with fostering innovation, but also the fact that humanity has shown in the past that, when the well-being of people and the planet is at stake, it can focus on what is technically feasible rather than blindly pursuing all that is technically possible.

The legal basis for the establishment of IDA should be a UN resolution elaborating and adopting the text of the Statute of IDA, which would constitute the following elements of IDA:

  a. Purpose: The purpose of IDA would be, as defined above, to serve as a platform for technical cooperation in the field of digital transformation and DS, fostering human rights, safety, security, and peaceful uses of DS, and to act as a global supervisory and monitoring institution and regulatory authority responsible for approving market access, partnering with and supporting, on a global level, the work of the national regulatory authorities in the area of digital transformation and DS. It should foster the safe, secure, and peaceful uses of data-based systems, contributing to international peace and security, to the respect and realization of human rights, and to the United Nations’ Sustainable Development Goals.

  b. The 30 IDA principles (please see the appendix)

  c. Legal Status of IDA (please see the appendix)

  d. Membership of IDA (please see the appendix)

  e. Rights and Responsibilities of IDA (please see the appendix)

  f. Mechanisms, Measures and Instruments of IDA (please see the appendix)

  g. Governance of IDA (please see the appendix)

7. Broad global support for IDA

Besides a growing international and interdisciplinary network of experts calling for the establishment of HRBDS and IDA (IDA, 2024), the Elders (an independent group of world leaders founded by Nelson Mandela that includes former UN Secretary-General Ban Ki-moon and Ireland’s first female President Mary Robinson) recently endorsed the concrete recommendations for human rights-based DS and a global agency to monitor them, and called upon the UN to take appropriate action. In their statement of May 31, 2023, the Elders took up two specific suggestions for action from the book “Digital Transformation and Ethics. Ethical Considerations on the Robotization and Automation of Society and the Economy and the Use of Artificial Intelligence” (Kirchschlaeger, 2021): “human rights-based data-based systems” and, above all, the creation of an “International Data-Based Systems Agency (IDA)” at the UN following the model of the International Atomic Energy Agency (IAEA).

Thus, the Elders declared: “A new global architecture is needed to manage these powerful technologies within robust safety protocols, drawing on the model of the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency. These guardrails must ensure AI is used in ways consistent with international law and human rights treaties. AI’s benefits must also be shared with poorer countries. No existing international agency has the mandate and expertise to do all this. The Elders now encourage a country or group of countries to request as a matter of priority, via the UN General Assembly, that the International Law Commission draft an international treaty establishing a new international AI safety agency” (The Elders, 2023).

The idea of a human rights-based and legally binding regulatory framework as well as the establishment of an institution enforcing the global regulation enjoys the support of Pope Francis (Pope Francis, 2024).

UN Secretary-General António Guterres also supports a human rights-based and legally binding regulatory framework and the creation of an international AI watchdog body like the International Atomic Energy Agency (IAEA): “I would be favorable to the idea that we could have an artificial intelligence agency (…) inspired by what the international agency of atomic energy is today” (Guterres, 2023a). In the UN Security Council on July 18, 2023, he called for a new UN body like IDA to tackle threats posed by artificial intelligence (Guterres, 2023b).

UN High Commissioner for Human Rights Volker Türk has demanded “urgent action” and proposed HRBDS and a coordinated global response towards an institutional solution like the creation of an “International Data-Based Systems Agency (IDA)” in his statement on AI and human rights of July 12, 2023 (Türk, 2023).

The UN Human Rights Council unanimously adopted, on July 14, 2023, its latest resolution on “New and emerging digital technologies and human rights” (UN Human Rights Council, 2023), which included for the first time an explicit reference to AI and the promotion and protection of human rights. The resolution emphasizes that new and emerging technologies with an impact on human rights “may lack adequate regulation”, highlights the “need for effective measures to prevent, mitigate and remedy adverse human rights impacts of such technologies”, and stresses the need to respect, protect, and promote human rights “throughout the lifecycle of artificial intelligence systems”. It calls for frameworks for impact assessments related to human rights, for due diligence to assess, prevent, and mitigate adverse human rights impacts, and for effective remedies, human oversight, and accountability.

On March 21, 2024, the UN General Assembly unanimously adopted the resolution “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development” (United Nations General Assembly, 2024), promoting “safe, secure and trustworthy” “artificial intelligence (AI)” systems that will also benefit sustainable development for all. It emphasizes: “The same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems.”

Also, some voices from multinational technology companies, among others Sam Altman (co-founder and CEO of OpenAI, which developed ChatGPT), have called for an agency like IDA (Euronews, 2023; Santelli, 2024).

Now is the time.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/dap.2024.38.

Author contribution

Peter G. Kirchschlaeger (Full Professor of Theological Ethics and Director of the Institute of Social Ethics ISE at the Faculty of Theology of the University of Lucerne, as well as Visiting Professor at the Chair for Neuroinformatics and Neural Systems at ETH Zurich and at the ETH AI Center) is the sole author of this comment.

Data availability statement

The data that support the findings of this study are openly available in the publications included in the list of references.

Funding statement

This work received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing interest

The author declares none.

Disclosure statement

Not applicable.

References

Baker McKenzie (2023) International: Can a global framework regulate AI Ethics? Insight Plus, 8 November. https://insightplus.bakermckenzie.com/bm/investigations-compliance-ethics/international-can-a-global-framework-regulate-ai-ethics (accessed 24 April 2024).
Bradford, A (2023) Digital Empires. The Global Battle to Regulate Technology. Oxford: Oxford University Press, pp. 35–145.
Carnegie Council for Ethics in International Affairs (2023) Envisioning Modalities for AI Governance: A Response from AIEI to the UN Tech Envoy. Artificial Intelligence & Equality Initiative. https://www.carnegiecouncil.org/media/article/envisioning-modalities-ai-governance-tech-envoy#gaio (accessed 24 April 2024).
Clifford, C (2018) Elon Musk: Mark my words – A.I. is far more dangerous than nukes. CNBC, 13 March. https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html (accessed 24 April 2024).
Euronews (2023) OpenAI’s Sam Altman calls for an international agency like the UN’s nuclear watchdog to oversee AI. https://www.euronews.com/next/2023/06/07/openais-sam-altman-calls-for-an-international-agency-like-the-uns-nuclear-watchdog-toover (accessed 24 April 2024).
European Commission (2024) AI Act. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai (accessed 24 April 2024).
Guterres, A (2023a) UN Chief Backs Idea of Global AI Watchdog Like Nuclear Agency. June 2023. https://www.reuters.com/technology/un-chief-backs-idea-global-ai-watchdog-like-nuclear-agency-2023-06-12/ (accessed 24 April 2024); https://press.un.org/en/2023/sgsm21832.doc.htm (accessed 24 April 2024).
Guterres, A (2023b) Secretary-General Urges Security Council to Ensure Transparency, Accountability, Oversight, in First Debate on Artificial Intelligence. https://press.un.org/en/2023/sgsm21880.doc.htm (accessed 24 April 2024).
IDA (2024) Supporters of IDA. https://idaonline.ch/supporters-of-ida/ (accessed 24 April 2024).
International Atomic Energy Agency (IAEA) (2011) The international legal framework for nuclear security. https://www.iaea.org/publications/8565/the-international-legal-framework-for-nuclear-security (accessed 24 April 2024).
International Atomic Energy Agency (IAEA) (2013) IAEO Basiswissen. Den Beitrag nuklearer Technik zur Gesellschaft maximieren und ihre friedliche Verwendung verifizieren. Vienna: International Atomic Energy Agency.
Kirchschlaeger, PG (2013a) Wie können Menschenrechte begründet werden? Ein für religiöse und säkulare Menschenrechtskonzeptionen anschlussfähiger Ansatz. ReligionsRecht im Dialog 15. Muenster: LIT-Verlag.
Kirchschlaeger, PG (2013b) Human Rights as an Ethical Basis for Science. Journal of Law, Information and Science 22(2), 117.
Kirchschlaeger, PG (2021) Digital Transformation and Ethics. Ethical Considerations on the Robotization and Automation of Society and the Economy and the Use of Artificial Intelligence. Baden-Baden: Nomos.
Pope Francis (2024) Artificial Intelligence and Peace. Message of Pope Francis for the 57th World Day of Peace, 1 January 2024. https://www.vatican.va/content/francesco/en/messages/peace/documents/20231208-messaggio-57giornatamondiale-pace2024.html (accessed 24 April 2024).
Quintavalla, A and Temperman, J (eds) (2023) Artificial Intelligence and Human Rights. Oxford: Oxford University Press.
Santelli, F (2024) Sam Altman: In pochi anni l’IA sarà inarrestabile, serve un’agenzia come per l’energia atomica. La Repubblica, 18 January. https://www.repubblica.it/economia/2024/01/18/news/sam_altman_in_pochi_anni_lia_sara_inarrestabile_serve_unagenzia_come_per_lenergia_atomica-421905376/amp/ (accessed 24 April 2024).
The Elders (2023) The Elders urge global co-operation to manage risks and share benefits of AI. https://theelders.org/news/elders-urge-global-co-operation-manage-risks-and-share-benefits-ai (accessed 24 April 2024).
Türk, V (2023) Artificial intelligence must be grounded in human rights, says High Commissioner. https://www.ohchr.org/en/statements/2023/07/artificial-intelligence-must-be-grounded-human-rights-says-high-commissioner (accessed 24 April 2024).
United Nations General Assembly (2024) Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development. 24 March. https://daccess-ods.un.org/access.nsf/Get?OpenAgent&DS=A/78/L.49&Lang=E (accessed 24 April 2024).
United Nations Human Rights Council (2023) Resolution “New and emerging digital technologies and human rights”. No. 41/11. 13 July. https://www.ohchr.org/en/hr-bodies/hrc/advisory-committee/digital-technologiesand-hr (accessed 24 April 2024).
Zuboff, S (2019) The Age of Surveillance Capitalism. The Fight for a Human Future at the New Frontier of Power. London: PublicAffairs.