1. Introduction
On May 2, 2024, Japanese Prime Minister Kishida Fumio announced the launch of the “Hiroshima AI Process Friends Group” at the Meeting of the Council at Ministerial Level of the Organisation for Economic Co-operation and Development (OECD).Footnote 1 This initiative, supported by 49 countries and regions – primarily OECD members – aims to foster international cooperation for ensuring global access to safe, secure, and trustworthy generative artificial intelligence (AI).Footnote 2
The Hiroshima AI Process Friends Group has supported the implementation of international guidelines as stipulated in the Hiroshima AI Process Comprehensive Policy Framework (Comprehensive Framework).Footnote 3 Endorsed by the G7 Digital and Tech Ministers on December 1, 2023, the Comprehensive Framework represents the first policy package agreed upon by the democratic leaders of the G7 to effectively steward the principles of human-centered AI design, safeguard individual rights, and enhance trust-based systems throughout the AI lifecycle. This milestone sends a promising signal of international alignment on the responsible development of AI.Footnote 4 Notably, the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (HCOC),Footnote 5 established as an integral part of the Comprehensive Framework, builds upon and aligns closely with existing policies across G7 members.Footnote 6
The G7 has emphasized that the principles are living documents,Footnote 7 leaving them with significant potential yet to be realized and notable questions ahead: How does the Hiroshima AI Process (HAIP) contribute to achieving interoperability of international rules on advanced AI models? How can it add value beyond other international collaborations on AI governance?Footnote 8 How can the G7, as a democratic referent, leverage its position as a leading advocate for responsible AI to encourage broader adoption of its governance principles, even in regions with differing political or cultural contexts?
To answer these questions, this article (1) provides a brief overview of the history of AI governance and relevant instances of international cooperation; (2) analyzes the structure and content of the HAIP, with specific focus on the HCOC; (3) examines how the HCOC fits into the international tapestry of AI governance, particularly within the context of G7 nations, and how it can foster regulatory interoperability on advanced AI systems; and (4) identifies and discusses prospective areas of focus for the future development of the HCOC.
2. AI governance: A historical overview and international initiatives
2.1 A short history of AI governance
Following the deep-learning breakthroughs of the early 2010s, AI adoption surged across a myriad of industries and sectors (Brynjolfsson & McAfee, 2014; LeCun et al., 2015; Bharadiya et al., 2023). This rapid integration process brought to light a multitude of potential risks associated with deploying AI. From fatal accidents involving autonomous vehiclesFootnote 9 to discriminatory hiring practices by AI algorithms (Andrews & Bucher, 2022), the real-world consequences of AI development have become increasingly evident. Furthermore, the manipulation of financial markets by algorithmic trading and the spread of misinformation on social media platforms (Ferrara, 2024) highlight the broader societal concerns surrounding the technology’s integration across sectors.
Fueled by growing awareness of AI risks in the mid-to-late 2010s, national governments (including G7 members), international organizations, tech companies and nonprofits launched a wave of policy and principle publications. Prominent examples include the “Ethics Guidelines for Trustworthy AI” by the European Union (EU) in 2019,Footnote 10 the “Recommendation of the Council on Artificial Intelligence” by the OECD in 2019 (updated in 2024),Footnote 11 and the “Recommendation on the Ethics of Artificial Intelligence” by the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 2021.Footnote 12 These publications emphasized pairing AI development with core values such as human rights, democracy and sustainability as well as key principles including fairness, privacy, safety, security, transparency and accountability.
While fundamental values and AI principles provide a crucial foundation for AI governance, translating them into implementable standards for AI systems remains a challenge, and addressing this challenge requires concrete and material guidance. Various initiatives have been undertaken at different levels to bridge this gap. At the national level, examples include the “AI Risk Management Framework”Footnote 13 (RMF) published by the National Institute of Standards and Technology (NIST) in the United States in January 2023, and Japan’s “AI Guidelines for Business” published by several ministries in April 2024.Footnote 14 On a supranational scale, leading examples include the 2023 AI Safety Summit’s “Emerging Processes for Frontier AI Safety”Footnote 15 and the G7’s HCOC – the latter being the focus of this article. Additionally, nongovernmental organizations such as the International Organization for Standardization (ISO) have contributed by issuing international standards on AI governance. The “AI Management System Standard ISO/IEC 42001”Footnote 16 was published in December 2023, specifying AI management system requirements. Another notable contribution to the risk management and stakeholder engagement field is the “Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems” (HUDERAF), proposed by the Alan Turing Institute to the Council of Europe’s Ad Hoc Committee on Artificial Intelligence (Leslie et al., 2022). Collectively, these diverse approaches underscore the ongoing efforts to transform abstract AI principles into a practical and implementable reality.
Despite this common direction, many published guidelines and principles for responsible AI development lack legally binding force, making them examples of “soft law.” While compliance with these documents helps companies with risk prevention strategies and forward-looking accountability measures, there are no guarantees or enforceability measures to ensure adherence to these standards. Thus, to advance stronger commitment to AI governance – in particular, addressing AI systems that pose high risks – there has been active movement to introduce regulations with legally binding force. For instance, the European Commission introduced the draft AI Act in 2021 (subsequently published in the Official Journal of the European Union on July 12, 2024),Footnote 17 focusing most of its compliance requirements on high-risk systems and even banning certain systems when the risks they present are deemed unacceptable.Footnote 18 Similarly, in 2022, Canada presented a legislative proposal, the Artificial Intelligence and Data ActFootnote 19 (AIDA), which focuses on establishing compliance requirements for high-impact AI applications. The United States has also seen a surge in legislative activity targeting AI. As of August 2024, more than 105 draft bills addressing AI had been introduced,Footnote 20 over 35 of which specifically target risk mitigation in AI applications.
The 2023 boom in foundation models presents a new layer of complexity to the already challenging landscape of AI governance. While conventional AI has faced issues such as limited explainability, diverse stakeholders and rapid evolution, foundation models expand the scope and reach of these challenges (Bommasani et al., 2021).Footnote 21 The application of these systems in countless contexts and their ease of operation create an even more intricate risk environment. As a result, there has been a surge in global efforts to establish rules and foster international cooperation around foundation models. The EU AI Act,Footnote 22 for example, has incorporated provisions specifically related to “general-purpose AI” systems.Footnote 23 Japan’s Liberal Democratic Party proposed the concept note for the Basic Law for the Promotion of Responsible AI in February 2024,Footnote 24 which targets advanced foundational AI models with significant societal impact. Similarly, the Chinese government implemented the Interim Measures for the Administration of Generative Artificial Intelligence ServicesFootnote 25 in August 2023, establishing specific requirements for models with “public opinion properties or the capacity for social mobilization.”Footnote 26 Figure 1 shows the overall structure of AI governance and key documents related to each layer of governance.

Figure 1. Overall structure of AI governance and key documents related to each layer.
The brief history of AI governance is characterized by a complex and multidimensional balancing act between innovation and regulation, rapidly advancing technology, and the integration of multivector interests – encompassing the technology industry, the general public and regulators. While these groups may have differing priorities, there is also growing recognition of the need for collaboration. Responses to AI risks have evolved: Nations and international bodies initially relied on soft-law principles and public–private collaborative efforts, whereas the current momentum is toward binding legislative action, with specific measures addressing advanced AI. Another crucial distinction is the regulatory scope, which can be generally categorized as comprehensive or sectoral. While the EU’s AI Act and Canada’s proposed AIDA encompass regulations that span industries, Japan, the United Kingdom (UK) and the United States have indicated a policy direction that considers industry-specific AI regulations or focuses on powerful foundational models. Nonetheless, the regulatory emphasis in all of these instances is primarily on high-risk AI, aiming to strike an appropriate balance between fostering technological development and ensuring safety. G7 democracies, in particular, find common ground in core principles such as human rights and democratic values, grounding them in transparency, explainability and bias prevention and forming a common foundation for responsible AI development.Footnote 27
2.2 Advancing international collaboration
This subsection first (1) provides an overview of key international AI governance initiatives, including significant documents and declarations such as the G7’s HAIP Comprehensive Framework, the Bletchley Declaration, the UN’s “Governing AI for Humanity” report, and the Council of Europe’s AI Treaty. These documents highlight various efforts to establish global standards and frameworks for AI governance. Subsequently, (2) the discussion examines the pivotal role of the G7’s framework in shaping global AI governance. The G7’s role in global AI policies is underscored by its active participation in major international initiatives and its significant economic, regulatory, and technological impact.
2.2.1 Key international AI initiatives
As countries make progress with AI rulemaking within their borders, international cooperation is also advancing.Footnote 28 The G7 is one of the most impactful forums for such international coordination. During the May 2023 summit, G7 leaders committed to establishing the HAIP by the end of the year to foster collaborative policy development on generative AI.Footnote 29 Within 6 months, the G7 digital and tech ministers had delivered the Comprehensive Framework.Footnote 30 This framework prioritizes proactive risk management and governance, transparency and accountability across the AI life cycle.Footnote 31 Additionally, it emphasizes anchoring AI development in human rights and democratic values while fostering the use of advanced AI for tackling global challenges such as climate change, health care, and education.Footnote 32
In November 2023, the AI Safety Summit held in the UK produced the “Bletchley Declaration,” a significant milestone in international AI collaboration.Footnote 33 The declaration addresses crucial aspects of AI governance, such as the protection of human rights, transparency, explainability, fairness, accountability, human oversight, bias mitigation, and privacy and data protection.Footnote 34 Additionally, it highlights the risks associated with manipulating or generating deceptive content.Footnote 35 The declaration was endorsed by 29 countries and regions, encompassing not only G7 and OECD nations but also partners from the Middle East, Africa, South America, Asia and, notably, China.Footnote 36 A second AI Safety Summit was held in Seoul in May 2024,Footnote 37 which reiterated the anchoring point of safety and highlighted inclusion and innovation as critical priorities for global convergence.Footnote 38
The United Nations is also active in forming international AI governance principles. In December 2023, the UN AI Advisory Body issued the interim report “Governing AI for Humanity.”Footnote 39 The report outlines a set of guiding principles and institutional roles designed to create a global AI governance framework, proposing essential considerations and actions to ensure that AI development and deployment serve the broader interests of humanity.Footnote 40 These include principles such as inclusivity,Footnote 41 public interestFootnote 42 and the importance of aligning AI governance with data governance and promoting a data commons.Footnote 43 Institutional functions highlighted in the report include assessing the future directions and implications of AIFootnote 44; developing and harmonizing standards,Footnote 45 safety and RMFsFootnote 46; and facilitating the development, deployment, and use of AI for economic and societal benefit through international multi-stakeholder cooperation.Footnote 47
In March 2024, the Council of Europe’s Ad Hoc Committee on Artificial Intelligence introduced the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (AI Treaty), a groundbreaking treaty on AI governance that sets a high bar on responsible AI development.Footnote 48 The AI Treaty, adopted in May 2024,Footnote 49 emphasizes the obligation of signatory nations (parties to the convention) to proactively ensure AI activities are aligned with human rights, democratic integrity and the rule of law. The treaty calls for comprehensive safeguards throughout the AI life cycle – including mechanisms for accountabilityFootnote 50 and transparencyFootnote 51 – and introduces comprehensive RMFs.Footnote 52 Furthermore, it calls for robust remedies and procedural protective measures against rights violations,Footnote 53 promotes rigorous risk and impact assessments,Footnote 54 and delineates duties for international cooperation and implementation, focusing on nondiscrimination and rights protection.Footnote 55
Nations participating in these initiatives vary. Figure 2 maps the structural involvement of various jurisdictions in the abovementioned international processes.

Figure 2. The global AI governance landscape.
2.2.2 The importance of the G7’s AI governance framework
Figure 2 illustrates why and how the G7 HAIP has significance in global rulemaking on advanced AI systems. First, the G7 nations participate in all the significant initiatives mentioned previously – namely, the AI Safety Summit, the UN AI Advisory Body and the AI Treaty. Second, the G7 represents a group of nations with significant economic, regulatory and technological impact and leadership. In 2023, the GDP of the G7 countries (excluding the EU, which is a non-enumerated member) accounted for approximately 26.4 percentFootnote 56 of the global total.Footnote 57 Moreover, most global companies developing advanced AI systems are based in one of the G7 member countries.Footnote 58 Establishing interoperable rules for advanced AI systems in these countries is crucial to avoid duplicative compliance costs and to facilitate innovation on a global scale. Third, the G7 is a group of democratic nations, which sets it apart from institutions that include nondemocratic states as members, such as the United Nations and the AI Safety Summit.Footnote 59 The HAIP will likely serve as a key foundation, not just for safety but also for realizing fundamental values such as human rights, democracy, and the rule of law in the development and implementation of advanced AI systems.
3. Analyzing the Hiroshima AI Process Comprehensive Framework
3.1 Structure of the Comprehensive Framework
In response to the rapid development and global spread of advanced AI, the G7 nations launched the HAIP in May 2023 under Japan’s presidency.Footnote 60 This international forum aims to establish common ground for responsible AI development and use. It focuses on fostering safe, secure and trustworthy AI by addressing key ethical issues, promoting collaboration on research and development, and encouraging international standards for a future where humanity benefits from AI advancements. Although the HAIP focuses on governance of advanced AI systems, the Comprehensive Framework avoids a rigid definition of this technology by providing the tentative definition of “the most advanced AI systems, including the most advanced foundation models and generative AI systems.”Footnote 61 This flexibility likely reflects a desire to adapt to future advancements in AI performance, functionalities, and deployment landscapes.
The Comprehensive Framework consists of four elements (see Figure 3). First, the OECD’s “G7 Hiroshima Process on Generative Artificial Intelligence”Footnote 62 serves as a background analysis of the opportunities and risks of advanced AI systems. Second, the “Hiroshima Process International Guiding Principles for All AI Actors”Footnote 63 (HIGP) provides 12 general principles for designing, developing, deploying, providing and using advanced AI systems without providing detailed guidance. Third, the HCOCFootnote 64 consists of a set of detailed instructions for the developers of advanced AI systems under the general principles the HIGP provides. Finally, the “project-based cooperation” on AI includes international collaborations in areas such as content authentication and the labeling of AI-generated content.

Figure 3. Four elements of the HAIP Comprehensive Framework.
The following section summarizes the contents of the HIGP and HCOC.
3.2 Contents of the HIGP
The HIGP is a comprehensive set of values and best practices promoting responsible development and use of advanced AI on a global scale. It consists of 12 core principles that serve as a foundation for responsible AI governance. These principles closely mirror the values and approaches that G7 nations are already exploring for their individual AI governance frameworks.Footnote 65 The analysis here suggests that the 12 principles may be divided into the following three groups (see Table 1):
1. Risk management and governance: This group comprises recommended actions to assess and mitigate risks associated with AI systems, ensuring they are reduced to a level that relevant stakeholders deem acceptable.
2. Stakeholder engagement: This group comprises recommended actions to ensure clear communication with and accountability to all relevant stakeholders.
3. Ethical and societal considerations: This group comprises recommended actions to ensure that the development, deployment and use of AI align with ethical standards and societal values.
Table 1. Elements of the Hiroshima Process International Guiding Principles

Note: The numerals listed for each item correspond to the articles assigned in the HIGP and HCOC. The authors devised the abbreviations for the principles and their categorization.
3.3 Overview of the Code of Conduct
Building on 11 of the HIGP’s 12 core principles (excluding the trustworthy and responsible use of advanced AI),Footnote 66 the HCOC translates these principles into a more specific code of practice for organizations developing and deploying advanced AI systems. The HCOC provides a comprehensive road map for AI processes and risk mitigation, outlining general actionable items on the matters of risk management and governance, stakeholder engagement, and ethical considerations.Footnote 67
3.3.1 Risk management and governance
The HCOC emphasizes in items 1, 2, 5, 6, 7 and 11 the importance of comprehensive risk management for organizations developing advanced AI across the life cycle of development and implementation. These practices include the following:
• Risk identification and mitigation: implementing rigorous testing throughout the AI life cycle, such as red-teaming, to identify and address potential safety, security, and trustworthiness issues
• Vulnerability and misuse management after deployment: post-deployment monitoring for vulnerabilities and misuse, with an emphasis on enabling third-party and user vulnerability reporting, possibly via bounty systems
• Governance and risk management: creating transparency about organizations’ governance and risk management policies and regularly updating users on privacy and mitigation measures
• Security investments: implementing robust security measures throughout the AI life cycle to protect critical system components against threats
• Content authentication: developing content authentication methods (e.g., watermarking) to help users identify AI-generated content
• Data quality, personal data and intellectual property protection: prioritizing data integrity, addressing bias in AI, upholding privacy and respecting intellectual property, and encouraging alignment with relevant legal standards
3.3.2 Stakeholder engagement
The HCOC highlights in items 3 and 4 the critical role of transparency and multistakeholder engagement:
• Transparency and accountability: emphasizing public transparency for organizations developing advanced AI, including reporting on both the capabilities of AI systems and their limitations
• Responsible information sharing: encouraging organizations to share information on potential risks, incidents, and best practices among industry, governments, academia and the public
3.3.3 Ethical and societal considerations
The HCOC establishes in items 8–10 a series of parameters to ensure AI is developed and deployed within the boundaries of human rights and democracy and is directed toward addressing global challenges:
• Research prioritization for societal safety: emphasizing collaborative research to advance AI safety, security and trustworthiness, with priority given to upholding democratic values, respecting human rights and protecting vulnerable groups
• AI for global challenges: prioritizing development of advanced AI systems to address global challenges such as climate change, health and education, aligning with the UN Sustainable Development Goals
• International technical standards: encouraging contribution to the development and use of international technical standards, including practices to promote transparency by allowing users to identify AI-generated content (e.g., watermarking), testing methodologies and cybersecurity policies
A detailed summary of the HCOC is presented in Table 2.
Table 2. Summary of the Hiroshima Process International Code of Conduct

Note: The numerals listed for each item correspond to those assigned in the HIGP and HCOC. The authors devised the abbreviations for the principles and their categorization.
4. The potential of the Hiroshima Code of Conduct: Toward interoperable frameworks for advanced AI systems
The HCOC, as articulated in the Comprehensive Framework, serves as a pivotal instrument to enhance interoperability between various AI governance frameworks.Footnote 68 But how compatible is the HCOC with the regulatory frameworks of G7 members? What are the mechanisms or functionalities that make this interoperability possible? First, the HCOC (and similar voluntary codes of conduct) can operate as potent, nonbinding “common guidance.” Although not legally enforceable, the gravitas and direction of these documents can wield significant practical influence as soft law (Guzman & Meyer, 2010; Schwarcz, 2020; Wallach et al., 2022; Guruparan & Zerk, 2021). Soft-law documents like the HCOC can shape compliance behaviors either as foundations for good corporate governance or in anticipation of further regulation; they can serve as a reference in private contracts; and they can even factor into civil or tort liability decisions.Footnote 69 Moreover, such frameworks can provide stability and certainty in an evolving regulatory landscape, enabling organizations to navigate complex AI governance requirements effectively. Second, the HCOC may be integrated directly into each jurisdiction’s regulatory framework.Footnote 70 G7 nations are generally poised either to introduce new regulations or to revise existing structures on AI governance.Footnote 71 If these regulations draw upon the HCOC – whether by reference, content consistency, or formal incorporation – this will increase and facilitate regulatory interoperability as well as international cohesion, integrating an AI governance framework that promotes human rights, democracy and the rule of law.
This section explores the space the HCOC holds within the G7 regulatory context and how it can foster interoperability between the legislative frameworks of different G7 jurisdictions on advanced AI systems. For this, the section first (1) examines the current state of AI regulation within each G7 member state. This analysis assesses the compatibility between the HCOC principles and existing G7 member frameworks. Notably, a significant overlap already exists between the core elements of the G7 nations’ regulatory documents and the HCOC.Footnote 72 Second, (2) building on this compatibility, the section explores various avenues for integrating the HCOC into the regulatory frameworks of G7 member states. By exploring these options, the section identifies the most effective means of leveraging the HCOC to achieve interoperability in G7 AI governance.
4.1 Status of AI governance in the G7 and HCOC as common guidance
The HCOC serves as a central reference point in the evolving global landscape of AI governance. This section provides insight into how the HCOC aligns with the existing frameworks in G7 jurisdictions, including Canada, the EU, Japan, the UK and the United States. It then offers an AI-focused overview of each jurisdiction’s regulatory status, identifies the documents that most closely align with the HCOC’s structure and functionality, and evaluates their compatibility with the HCOC’s content. A summary of the analysis is shown in the “Annex.”
1. Canada: Canada is in the process of formulating a comprehensive regulatory framework for AI under Bill C-27, known as AIDA.Footnote 73 This legislation prioritizes risk mitigation for “high-impact” AI systems.Footnote 74 Additionally, Canada has published a Voluntary Code of Conduct for Responsible Development and Management of Advanced Generative AI Systems,Footnote 75 offering nonbinding guidelines for AI industry stakeholders.
2. European Union: The EU has positioned itself at the forefront of AI regulation with the AI Act, published in July 2024.Footnote 76 This legislation sets a robust and comprehensive framework for trustworthy AI development and implementation, emphasizing a risk-based regulatory approach.Footnote 77 The AI Act mandates the development of codes of practice to guide its implementation, ensuring alignment with international standards as well as evolving technology and market trends.Footnote 78
3. Japan: Japan’s approach to AI governance emphasizes maximizing the positive societal impacts of AI and capitalizing on a risk-based and agile governance model.Footnote 79 Taking a sector-specific approach, Japan seeks to promote AI implementation through regulatory reforms tailored to specific industries and markets, such as transportation, finance, and medical devices.Footnote 80 This strategy includes updating more than 10,000 regulations or ordinances that require “analog” compliance methods, including requirements for paper documents, on-site periodic inspections and dedicated in-person staffing.Footnote 81 In addition, Japan launched the AI Guidelines for BusinessFootnote 82 as a voluntary AI risk management tool. The principles for advanced AI systems established in the HIGP are directly integrated into these guidelines, following Japan’s presidency of the G7 during the HAIP Comprehensive Framework drafting process.
4. United Kingdom: The UK is developing a decentralized regulatory approach focusing on sector-specific guidelines, a pro-innovation stance and public–private collaboration through specialized AI institutions.Footnote 83 While the UK is not currently enforcing a comprehensive AI law or drafting a central code of conduct, it emphasizes traditional AI governance principles such as safety, security, transparency, and fairness to inform its sector-driven regulations.Footnote 84 The UK Department for Science, Innovation and Technology also published a practical guidance code in the form of the Emerging Processes for Frontier AI SafetyFootnote 85 ahead of the UK AI Safety Summit. The summit culminated in the Bletchley Declaration, a shared commitment to safe and responsible AI development signed by 28 nations and the EU.Footnote 86
5. United States: The United States has adopted a decentralized, multitiered regulatory strategy for AI governance, with agencies overseeing sector-specific regulations.Footnote 87 Key initiatives include the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,”Footnote 88 which directs sector-specific agencies to formulate regulations; the Risk Management Frameworks,Footnote 89 developed by NIST to provide guidelines for risk assessment and management; the “White House’s Blueprint for an AI Bill of Rights,”Footnote 90 outlining foundational principles for AI development; and nonbinding voluntary commitments for ensuring safe, secure and trustworthy AIFootnote 91 endorsed by companies such as Amazon, Anthropic, Google, Inflection, Meta, Microsoft, Nvidia and OpenAI, among others.
4.2 Achieving and enhancing regulatory interoperability: The HCOC as a reference point for AI governance development
The AI governance landscape across the G7 is complex and multifaceted. The EU has instituted robust and comprehensive regulations through its AI Act, and Canada is in the process of developing similar hard-law frameworks.Footnote 92 Conversely, the United States, Japan and the UK lean toward sector-specific and lighter-touch regulatory approaches.Footnote 93 This diverse regulatory environment, marked by varying levels of stringency, scope and focus, poses challenges for global operations, requiring businesses to navigate a complex regulatory patchwork, as well as differing rights and obligations across G7 nations. The HCOC holds promise as a unifying mechanism to bridge these regulatory disparities and promote interoperability.
The HCOC may be integrated into national regulations across G7 countries through various means, such as direct formal legal referencing or recognition, material content integration, and leveraging or harmonizing specific aspects of regulatory developments. Pathways for integration into the regulatory frameworks of the G7 jurisdictions include the following:
• Canada: Overall, Canada’s Voluntary Code of Conduct specifically, and its regulatory trajectory generally, demonstrate alignment with the international conversation on ethical AI development and the HCOC’s principles.Footnote 94 As AIDA evolves, it presents the potential to translate these principles into enforceable regulations, further solidifying Canada’s commitment to responsible AI advancement. Given that AIDA could likely address advanced AI systems specifically within its regulatory scope, this upcoming law opens a clear possibility to find common ground with the HCOC’s principles and functionality.
• European Union: The EU AI Act mandates the development of codes of practice that complement its implementation.Footnote 95 These codes of practice align with the HCOC’s focus, addressing practical aspects of responsible AI development. Furthermore, the EU acknowledges in the EU AI Act that international standards should play a role in shaping these codes of practice,Footnote 96 presenting an opportunity to materially integrate or formally reference the HCOC in the EU AI governance framework.
• Japan: In February 2024, the Liberal Democratic Party proposed the concept note for the Basic Law for the Promotion of Responsible AI.Footnote 97 The proposed legislation specifically targets advanced foundational AI models with significant societal impact. It requires model developers to adhere to seven key measures,Footnote 98 including third-party vulnerability checks and the disclosure of model specifications. The requirements align with the voluntary commitments the White House has requested from U.S. companies.Footnote 99 The HCOC could serve as a valuable reference point for implementation of these key measures, especially considering that the HCOC principles are already integrated into Japan’s AI Guidelines for Business.Footnote 100
• United Kingdom: Besides leading international discussions on AI governance through initiatives such as the Bletchley Declaration,Footnote 101 the UK is proactively formulating its own AI governance framework. According to “A Pro-innovation Approach to AI Regulation,”Footnote 102 the UK government is undertaking technical policy analysis on the regulation and life-cycle accountability of capable general-purpose systems. It has also committed to updating the Emerging Processes for Frontier AI Safety,Footnote 103 which is highly compatible with the HCOC, by the end of 2024. For these purposes, the UK is opting for collaborative public–private development through institutions such as the Digital Regulation Cooperation ForumFootnote 104 and the AI Safety Institute.Footnote 105 Considering the current institutional inertia and the stalled progress of its draft intellectual property code,Footnote 106 the UK could leverage the HCOC and its international scope to inform these regulatory initiatives.
• United States: The United States is in active development of its AI governance frameworks. The AI executive order has directed multiple agencies to deliver sector-specific guidance publications, and as of August 2024 there are more than 105 draft bills addressing AI, with over 35 focused on risk mitigation.Footnote 107 Notably, after releasing RMF 1.0 in January 2023, NIST established the Generative AI Public Working GroupFootnote 108 to spearhead development of a cross-sectoral AI RMF profile for managing the risks of generative AI models or systems.Footnote 109 The HCOC’s emphasis on responsible risk management and governance aligns seamlessly with the United States’ principles-based trajectory and could fit into proposed risk mitigation legislation, positioning the HCOC as a crucial reference in shaping AI regulatory policy in the United States.
5. HCOC 2.0: Next steps toward a more harmonized and impactful AI governance framework
The current AI governance landscape is characterized by jurisdictional fragmentation, with disparate national regulations imposing varying obligations on developers and offering inconsistent protections to users. While the HCOC holds promise for harmonizing G7 approaches and inspiring broader international cooperation, its lack of specificity currently limits its practical utility. The following section posits that, to realize the HCOC’s full potential, future G7 discussions should prioritize development in key areas such as (1) terminology and definitional interoperability, (2) risk management, (3) stakeholder engagement, (4) ethical considerations and (5) further areas for exploration not currently contained in the HCOC. By establishing a robust and adaptable framework, the G7 can position the HCOC as a global benchmark for responsible AI development, anchored in shared values of human rights, democracy and the rule of law.
5.1 Terminology and definitions: Indexing a common vocabulary
The HCOC can serve as a foundation for consistent definitions – or a shared methodology for identifying key terms – in the governance of advanced AI systems, facilitating smoother regulatory implementation across jurisdictions. Avenues toward terminology consensus include the following:
• Bridge the terminology gap: The HCOC can endorse consistent definitions for streamlined regulatory implementation across jurisdictions, fostering a common understanding of critical concepts. This could be achieved by including a glossary of key terms with clear, agreed-upon definitions or by establishing methodologies for identifying and classifying AI systems based on the factors relevant to risk assessment. By establishing a common language, the HCOC can ease communication, regulatory certainty and business-sector collaboration across borders. Underscoring the importance of shared language around AI, the EU and the United States are currently developing a shared vocabulary of 65 key terms “essential to understanding risk-based approaches to AI.”Footnote 110 Notably, even where common terminology has been developed (e.g., through the U.S.-EU Trade and Technology Council, OECD or the ISO), the definition of advanced AI systems remains unclear, leaving open the question of which criteria (e.g., floating point operations, quality and size of data set, or input and output modalities)Footnote 111 should be used to identify advanced AI systems; the sketch below illustrates one hypothetical approach.
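To make the definitional gap concrete, the following minimal sketch shows one hypothetical way a shared methodology could operationalize the classification of an “advanced AI system.” The thresholds, field names and function are illustrative assumptions only – the compute figure loosely echoes the EU AI Act’s presumption of systemic risk for general-purpose models trained with more than 10^25 floating point operations – and none of them is prescribed by the HCOC.

```python
from dataclasses import dataclass

# Illustrative thresholds only; the HCOC defines no such criteria.
# The compute figure echoes the EU AI Act's presumption of systemic risk
# for general-purpose models trained with more than 1e25 FLOPs.
COMPUTE_THRESHOLD_FLOPS = 1e25
DATA_THRESHOLD_TOKENS = 10**13  # assumed proxy for data-set size
MULTIMODAL_THRESHOLD = 2        # e.g., text + image counts as multimodal

@dataclass
class ModelProfile:
    name: str
    training_compute_flops: float
    training_tokens: int
    modalities: tuple  # e.g., ("text", "image")

def is_advanced_ai_system(model: ModelProfile) -> bool:
    """Hypothetical rule combining the criteria named in the text:
    training compute, data-set size, and input/output modalities."""
    return (
        model.training_compute_flops >= COMPUTE_THRESHOLD_FLOPS
        or model.training_tokens >= DATA_THRESHOLD_TOKENS
        or len(model.modalities) >= MULTIMODAL_THRESHOLD
    )

# A large multimodal model trips all three illustrative criteria.
frontier = ModelProfile("frontier-model", 3e25, 12 * 10**12, ("text", "image"))
print(is_advanced_ai_system(frontier))  # True
```

Whichever criteria are ultimately chosen, codifying them in a shared glossary would let regulators and developers in different jurisdictions reach the same answer for the same model.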
5.2 Risk management and governance: Building a common and robust framework
Effective risk management stands as a cornerstone of responsible development of advanced AI systems. The HCOC can significantly contribute to this endeavor by advocating for shared principles and best practices. Risk management cohesion across jurisdictions includes the following:
• Identify and share security risks, particularly systemic risks: The HCOC can enhance its interoperability contribution by explicitly listing and addressing security risks, particularly those with systemic consequences. This can be achieved through a two-pronged approach. First, the HCOC can integrate a comprehensive list of typical AI risks common to advanced AI systems, such as AI hallucinations (generating inaccurate outputs), fake content generation (deepfakes), intellectual property infringement (copyrighted content integration in data sets), job market transformations due to automation, the environmental impact of AI systems, bias amplification based on training data and privacy concerns, among others. Case studies can be implemented through “project-based cooperation,” which constitutes the fourth element of the Comprehensive Framework. Second, the HCOC can establish a risk assessment framework to categorize AI systems based on their potential for harm. This framework could leverage existing models such as the EU AI Act’s categorization of general-purpose AI models with systemic risk and its classification rules for high-risk AI systems.Footnote 112 By prioritizing systems with the greatest potential for systemic or high-impact issues, the HCOC can provide a clearer road map for identifying, understanding, and mitigating various risks (see the first sketch after this list).
• Enhance clarity in the risk management process: The HCOC can encourage the development of standardized risk management policies tailored to specific AI applications. Future drafting can reference or draw insights from established RMFs, such as ISO/IEC 42001:2023 or NIST’s RMF – especially the generative AI profile developed by the Generative AI Public Working GroupFootnote 113 and released in July 2024. Additionally, policies can incorporate learnings from other reputable sources to enhance clarity and comprehensiveness.
• Develop standard data governance, risk management and information security policies: Establishing robust data protection protocols is essential for building trust and mitigating risks associated with AI development. The development of standardized policies can leverage established frameworks such as ISO/IEC 27001 and ISO/IEC 27002 or NIST’s Cybersecurity Framework, which provide a structured foundation adaptable to the unique risk landscape of the development of advanced AI systems.
• Implement content authentication mechanisms: The HCOC can list reliable content authentication and provenance mechanisms to enable users to identify the originators of content or establish common labeling mechanisms to help users understand that AI has generated the content. These contributions could be based on input from the HAIP’s project-based cooperation. Authentication mechanisms can safeguard against misinformation and uphold democratic values and human rights by verifying data sources and outputs. However, it is imperative to balance these efforts with the protection of individual privacy, ensuring authentication processes do not compromise personal data. This balance is key to maintaining public trust and promoting the responsible and user-centric deployment of AI technologies.
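As a minimal sketch of the risk assessment framework proposed in the first bullet above, the following illustration maps typical risks to assessment tiers. The tiers and catalog entries are assumptions loosely modeled on the EU AI Act’s systemic-risk and high-risk categories; the HCOC itself defines no such scheme.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers, loosely modeled on the EU AI Act's distinction
    between systemic-risk general-purpose models and high-risk systems."""
    UNACCEPTABLE = 4
    SYSTEMIC = 3
    HIGH = 2
    LIMITED = 1

# Hypothetical mapping from typical risks listed above to the minimum
# tier at which they should be assessed; not an HCOC artifact.
RISK_CATALOG = {
    "deepfake generation": RiskTier.SYSTEMIC,
    "bias amplification": RiskTier.HIGH,
    "privacy leakage": RiskTier.HIGH,
    "hallucinated outputs": RiskTier.LIMITED,
}

def assessment_priority(system_risks: list) -> RiskTier:
    """Return the highest tier among a system's identified risks, so the
    most severe potential harm drives the depth of assessment."""
    return max((RISK_CATALOG[r] for r in system_risks), key=lambda t: t.value)

print(assessment_priority(["hallucinated outputs", "deepfake generation"]))
# RiskTier.SYSTEMIC
```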
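The content authentication bullet can likewise be made concrete. The sketch below binds a minimal, hypothetical provenance label to AI-generated content via a content hash; the field names are assumptions loosely inspired by content-credential schemes such as C2PA and are not prescribed by the HCOC or the HAIP’s project-based cooperation.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_label(content: bytes, generator: str) -> dict:
    """Build a minimal, hypothetical provenance record: the hash binds
    the label to the exact bytes it describes."""
    return {
        "ai_generated": True,
        "generator": generator,  # e.g., the model or service that produced it
        "created_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_label(content: bytes, label: dict) -> bool:
    """Check that the content still matches the hash recorded in its label."""
    return label["sha256"] == hashlib.sha256(content).hexdigest()

text = b"An AI-written summary of the Hiroshima AI Process."
label = make_provenance_label(text, "example-model-v1")
print(json.dumps(label, indent=2))
print(verify_label(text, label))            # True
print(verify_label(b"edited text", label))  # False: alteration detected
```

A bare hash only detects alteration; in practice such labels would need to be cryptographically signed, and resilient to being stripped, to support the balance between authentication and privacy discussed above.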
5.3 Stakeholder engagement: Fostering transparency and accountability
Building trust in AI necessitates robust stakeholder engagement. A transparent and accountable AI development process fosters public confidence and encourages information sharing. Future pathways for stakeholder engagement include the following:
• Establish standardized formats for transparency reports: The HCOC can promote the adoption of standardized formats for transparency reports. By consolidating best practices and identifying common risks, the HCOC can offer a template for companies to self-assess and disclose relevant information consistently across jurisdictions. A potential model for standardizing transparency reports is the UK Algorithmic Transparency Recording Standard.Footnote 114 Standardization would give companies uniform international disclosure criteria, enhancing cross-border reporting cohesion and auditing consistency as well as allowing the public to better understand the development and operation of AI systems.
• Define clear formats for incident sharing: Encouraging adoption of clear incident-sharing formats can facilitate the exchange of information about security breaches, biases or unintended consequences observed in deployed AI systems. This collaborative approach to sharing and learning from incidents enables stakeholders to develop effective mitigation strategies, ultimately enhancing the safety and reliability of AI technologies.
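To suggest what a clear, shared incident-sharing format might look like in practice, the following sketch validates a hypothetical incident record against a fixed set of required fields. The schema is an illustrative assumption, not a format proposed by the HCOC, the UK Algorithmic Transparency Recording Standard or any other existing standard.

```python
# Hypothetical required fields for a shared incident-report format;
# no existing standard prescribes this schema.
REQUIRED_FIELDS = {
    "system_name": str,
    "developer": str,
    "incident_type": str,  # e.g., "security breach", "bias", "misuse"
    "severity": str,       # e.g., "low", "medium", "high", "systemic"
    "description": str,
    "mitigation": str,
    "reported_at": str,    # ISO 8601 timestamp
}

def validate_incident_report(report: dict) -> list:
    """Return a list of problems; an empty list means the report
    conforms to the (illustrative) shared format."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in report:
            problems.append(f"missing field: {field}")
        elif not isinstance(report[field], expected_type):
            problems.append(f"wrong type for field: {field}")
    return problems

report = {
    "system_name": "example-model-v1",
    "developer": "Example Corp",
    "incident_type": "bias",
    "severity": "medium",
    "description": "Skewed outputs for a protected group in hiring prompts.",
    "mitigation": "Output filtering deployed; retraining scheduled.",
    "reported_at": "2024-08-01T12:00:00+00:00",
}
print(validate_incident_report(report))  # [] -> conforms
```

A machine-checkable format along these lines would let regulators, developers and researchers in different jurisdictions aggregate and compare incidents, rather than exchange free-form prose.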
5.4 Ethical and societal considerations: Upholding the rule of law, human rights and core democratic values
The G7, a group of leading democracies, has a unique opportunity to shape the global conversation around responsible AI development. The HCOC, as an initiative stemming from this group, can play a crucial role in ensuring AI development aligns with the ethical and societal considerations that underpin democratic values and secure human rights in AI development and implementation. Potential pathways to prioritizing these principles include the following:
• Reinforce the primacy of rule of law, human rights and democratic principles: The HCOC already champions these core values and emphasizes human-centric design. However, there is room for further enhancement and substantiation for practical application. For instance, the HCOC could enhance its guidance on how organizations should foster research and AI development that prioritizes the protection of fairness, privacy and intellectual property rights while also tackling global challenges such as climate change, health and education. Rather than providing detailed descriptions itself, the HCOC could reference other international agreements or widely recognized standards. Furthermore, the HCOC could strengthen democratic principles and the rule of law by highlighting due safeguards for freedom of expression, ensuring AI is not used to suppress dissent or impose undue restrictions on information access, guaranteeing a right to remedy for individuals adversely affected by AI and promoting transparency and accountability in AI decision-making processes. Enhancing human-centricity could involve advocating for effective oversight in high-risk applications, providing individuals with explanations regarding AI-driven decisions affecting them, and promoting inclusive design that caters to the diverse needs and perspectives of various populations to ensure equitable AI benefits.
5.5 Further areas for exploration
The HCOC can play a key role in exploring several critical areas for further development in responsible AI:
• Acknowledge special considerations for government use of AI: The HCOC can play a pivotal role in delineating special considerations for government use of AI, ensuring governmental powers in AI deployment are appropriately circumscribed and limited. Drawing inspiration from the AI TreatyFootnote 115 and leveraging principles from the OECD Declaration on Government Access to Personal Data Held by Private Sector Entities,Footnote 116 the HCOC can establish clear guidelines that emphasize due process in developing and deploying advanced AI systems by the public sector, such as legal basis, legitimate aims, oversight, and redress, in addition to shared principles such as privacy, transparency and accountability. By aligning with these principles, the HCOC can become a democratic referent, and governments can leverage the power of AI responsibly while mitigating potential risks and fostering public trust.
• Harmonize full life cycle regulatory approaches: The HCOC can explore the potential for incorporating best practices from various jurisdictions’ regulations. This could involve elements such as certification mechanisms, robust oversight mechanisms and iterative audit controls.
∘ Certification mechanisms: The HCOC can establish a framework for certification and registration mechanisms for high-risk advanced AI systems. Such a system could ensure rigorous evaluation throughout the life cycle of high-risk advanced AI systems, from pre-market assessments to ongoing post-market analyses and compliance reviews. The HCOC could define risk categories and establish criteria for when certification is required.
∘ Oversight methodologies: The HCOC can emphasize the importance of effective oversight of AI systems to mitigate potential harm and address incidents effectively. In some cases, human involvement in critical AI processes is necessary, while in others machines can detect risks far faster and more precisely than humans. The HCOC could propose guidelines on when to prioritize human judgment and intervention, especially in high-risk AI applications, ensuring a balance between automation and human control.
∘ Audit mechanisms: The HCOC can extend procedural cohesion beyond AI implementation by establishing common processes for iterative audits, ensuring continuous monitoring and evaluation of AI systems’ compliance with established principles and guidelines. By considering and potentially adapting existing frameworks, such as the UK Guidance on the AI Auditing Framework,Footnote 117 the HCOC can equip organizations with practical tools for ongoing evaluations. These iterative audits would allow for continuous improvement and ensure AI systems remain aligned with responsible development principles throughout their life cycle.
• Establish means for redress: The HCOC could expand discussions about redress for harms caused by advanced AI systems. This could involve exploring access to remedies and explanations for individuals affected by AI decisions in areas ranging from copyright and intellectual property to judicial proceedings. As AI plays a growing role in judicial decision-making, for example, developing specific appeal mechanisms for harms caused by AI-based judicial decisions may also become crucial. The HCOC could encourage developers and deployers of advanced AI systems to provide appropriate dispute resolution mechanisms to users and harmed parties. Furthermore, to make victim relief more effective, G7 members could discuss shifting the burden of proof regarding damages or causal links and establishing accessible, fast and low-cost dispute resolution mechanisms for damages caused by advanced AI systems.
• Foster shared responsibility in the AI ecosystem: The HCOC addresses developers of advanced AI systems only.Footnote 118 However, its scope could expand in the future to other actors within the AI value chain, such as deployers and users of advanced AI systems. In addition, it is important to examine how to distribute responsibility and liability among stakeholders, ensuring all parties are accountable for their respective roles in potential harms.
By focusing on these key areas, the HCOC can evolve into a powerful tool for facilitating a more cohesive and effective approach to AI governance on a global scale. The HCOC’s dynamic nature positions it to bridge the gap between diverse national frameworks, fostering a future of responsible AI development for the G7 nations and beyond.
6. Conclusion
The G7 nations’ endorsement of the HIGP and the HCOC, supported by more than 40 countries through the Hiroshima AI Process Friends Group, represents a significant milestone in global AI governance.Footnote 119 This unified stance by the world’s leading democratic economies underscores a robust international commitment to advancing human-centered AI development, safeguarding individual rights, and strengthening trust in AI systems. The collective weight and global influence of the nations lending their support to this process amplify the significance of its agreements, marking them as pivotal steps in shaping the future of AI governance.
However, for the promise of the Comprehensive Framework to be fully realized, its key practical instrument, the HCOC, requires further development. While the HCOC, as this article reveals, significantly aligns with the trajectory of existing G7 policies, it currently lacks the material specificity to provide truly effective guidance for practical implementation. Moving forward, it is crucial to engage in substantive discussions on enhancing the HCOC in several key areas. These areas include the following:
• Coordinating a common vocabulary: A unified understanding of key terms and definitions is essential for ensuring consistent interpretation of AI terms across borders.
• Developing robust RMFs and risk-based categorization: The HCOC should provide clear guidance on assessing and mitigating risks associated with advanced AI systems throughout the entire AI life cycle, from pre-market duties to post-market updates.
• Promoting harmonized stakeholder engagement: The HCOC can play a valuable role in encouraging cohesive approaches to stakeholder engagement and developing consistent transparency standards.
• Strengthening democratic and human rights principles: The HCOC should provide more concrete and actionable steps for upholding democratic values and safeguarding human rights in the context of AI development and deployment.
• Pursuing further areas for discussion: The HCOC’s potential extends beyond its current scope. The G7 can leverage this collaborative document to explore critical areas such as developing special considerations for government AI use, harmonizing life cycle regulatory practices (e.g., certification mechanisms, oversight methodologies and audit mechanisms), fostering shared responsibility within the AI ecosystem, and establishing efficient redress for AI harms.
By addressing these crucial areas, the HCOC has the potential to evolve into a truly robust and impactful instrument for global AI governance. A strengthened HCOC can serve as a valuable reference point not only for G7 nations and friends, but also for a broader international audience seeking to navigate the complexities of responsible AI development and deployment. This international alignment can help ensure the power of AI is harnessed for the benefit of all while mitigating potential risks and upholding core human values.
Funding statement
The authors declare no funding. This article is an updated version of a report published by the Center for Strategic & International Studies.
Competing interests
The authors declare no competing interests to disclose.
Annex. Mapping Jurisdictional Coverage of Key Principles in the Hiroshima Process International Code of Conduct: Alignment with National AI Regulations and Guidance

a Based on the structure of the HCOC.
b Given the structure of Canada’s Voluntary Code of Conduct, the numbers in this column correspond as follows: 1 = Accountability, 2 = Safety, 3 = Fairness and Equity, 4 = Transparency, 5 = Human Oversight and Monitoring, 6 = Validity and Robustness.
c The numbers listed in each cell indicate the section or article numbers of the corresponding documents in each country.