Published online by Cambridge University Press: 01 March 2021
Artificial intelligence (AI)-supported systems have transformative applications in the humanitarian sector, but they also pose unique risks for human rights, even when used with the best intentions. Drawing on research and expert consultations conducted across the globe in recent years, this paper identifies key points of consensus on how humanitarian practitioners can ensure that AI augments – rather than undermines – human interests in a rights-respecting way. Specifically, these consultations emphasized the necessity of an anchoring framework based on international human rights law as an essential baseline for ensuring that human interests are embedded in AI systems. Ethics can play a complementary role, filling gaps and elevating standards above the minimum requirements of international human rights law. This paper summarizes the advantages of this framework and identifies specific tools and best practices that either already exist and can be adapted to the AI context, or that need to be created, in order to operationalize it. As the COVID-19 crisis has laid bare, AI will increasingly shape the global response to the world's toughest problems, especially in the development and humanitarian sectors. To ensure that AI tools enable human progress and contribute to achieving the Sustainable Development Goals, humanitarian actors need to be proactive and inclusive in developing the tools, policies and accountability mechanisms that protect human rights.
The views expressed herein are those of the authors and do not necessarily reflect the views of the United Nations.
1 UN General Assembly, Roadmap for Digital Cooperation: Implementation of the Recommendations of the High-Level Panel on Digital Cooperation. Report of the Secretary-General, UN Doc. A/74/821, 29 May 2020 (Secretary-General's Roadmap), para. 6, available at: https://undocs.org/A/74/821 (all internet references were accessed in December 2020).
2 See, for example, the initiatives detailed in two recent papers on AI and machine learning (ML) applications in COVID-19 response: Miguel Luengo-Oroz et al., “Artificial Intelligence Cooperation to Support the Global Response to COVID-19”, Nature Machine Intelligence, Vol. 2, No. 6, 2020; Joseph Bullock et al., “Mapping the Landscape of Artificial Intelligence Applications against COVID-19”, Journal of Artificial Intelligence Research, Vol. 69, 2020, available at: www.jair.org/index.php/jair/article/view/12162.
3 Secretary-General's Roadmap, above note 1, para. 53.
4 AI is “forecast to generate nearly $4 trillion in added value for global markets by 2022, even before the COVID-19 pandemic, which experts predict may change consumer preferences and open new opportunities for artificial intelligence-led automation in industries, businesses and societies”. Ibid., para. 53.
5 Lorna McGregor, Daragh Murray and Vivian Ng, “International Human Rights Law as a Framework for Algorithmic Accountability”, International and Comparative Law Quarterly, Vol. 68, No. 2, 2019, available at: https://tinyurl.com/yaflu6ku.
6 See, for example, Yavar Bathaee, “The Artificial Intelligence Black Box and the Failure of Intent and Causation”, Harvard Journal of Law and Technology, Vol. 31, No. 2, 2018; Rachel Adams and Nora Ni Loideain, “Addressing Indirect Discrimination and Gender Stereotypes in AI Virtual Personal Assistants: The Role of International Human Rights Law”, paper presented at the Annual Cambridge International Law Conference 2019, “New Technologies: New Challenges for Democracy and International Law”, 19 June 2019, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3392243.
7 See, for example, Global Privacy Assembly, “Declaration on Ethics and Data Protection in Artificial Intelligence”, Brussels, 23 October 2018, available at: http://globalprivacyassembly.org/wp-content/uploads/2019/04/20180922_ICDPPC-40th_AI-Declaration_ADOPTED.pdf; UN Global Pulse and International Association of Privacy Professionals, Building Ethics into Privacy Frameworks for Big Data and AI, 2018, available at: https://iapp.org/resources/article/building-ethics-into-privacy-frameworks-for-big-data-and-ai/.
8 For an overview, see Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy and Madhulika Srikumar, Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI, Berkman Klein Center Research Publication No. 2020-1, 14 February 2020.
9 See Faine Greenwood, Caitlin Howarth, Danielle Escudero Poole, Nathaniel A. Raymond and Daniel P. Scarnecchia, The Signal Code: A Human Rights Approach to Information During Crisis, Harvard Humanitarian Initiative, 2017, p. 4, underlining the dearth of rights-based guidance for humanitarian practitioners working with big data. There are a few existing frameworks, however – most notably Data Science & Ethics Group (DSEG), A Framework for the Ethical Use of Advanced Data Science Methods in the Humanitarian Sector, April 2020, available at: https://tinyurl.com/yazcao2o. There have also been attempts to guide practitioners on humanitarian law as it applies to lethal autonomous weapons systems, including the Asser Institute's Designing International Law and Ethics into Military AI (DILEMA) project, available at: www.asser.nl/research/human-dignity-and-human-security/designing-international-law-and-ethics-into-military-ai-dilema.
10 Secretary-General's Roadmap, above note 1, para. 50.
11 UNGA Res. 73/179, 2018.
12 HRC Res. 42/15, 2019.
13 UNGA Res. 73/179, 2018.
14 Consultations include practical workshops on designing frameworks for ethical AI in Ghana and Uganda; on AI and privacy in the global South at RightsCon in Tunis; on a human rights-based approach to AI in Geneva, co-hosted with UN Human Rights; several events at the Internet Governance Forum in Berlin; and a consultation on ethics in development and humanitarian contexts, co-hosted with the International Association of Privacy Professionals and the European Data Protection Supervisor. These various consultations, which took place between 2018 and 2020, included experts from governments, international organizations, civil society and the private sector, from across the globe.
15 See the UN Global Pulse Expert Group on Governance of Data and AI website, available at: www.unglobalpulse.org/policy/data-privacy-advisory-group/.
16 See OCHA, Data Responsibility Guidelines: Working Draft, March 2019, available at: https://tinyurl.com/y64pcew7.
17 ICRC, Handbook on Data Protection in Humanitarian Action, Geneva, 2017.
18 F. Greenwood et al., above note 9.
19 Access Now, Human Rights in the Age of Artificial Intelligence, 2018, available at: www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf.
20 Article 19, Governance with Teeth: How Human Rights can Strengthen FAT and Ethics Initiatives on Artificial Intelligence, April 2019, available at: www.article19.org/wp-content/uploads/2019/04/Governance-with-teeth_A19_April_2019.pdf.
21 USAID Center for Digital Development, Reflecting the Past, Shaping the Future: Making AI Work for International Development, 2018.
22 DSEG, above note 9.
23 Jack M. Balkin, “2016 Sidley Austin Distinguished Lecture on Big Data Law and Policy: The Three Laws of Robotics in the Age of Big Data”, Ohio State Law Journal, Vol. 78, No. 5, 2017, p. 1219 (cited in L. McGregor, D. Murray and V. Ng, above note 5, p. 310). See also the European Union definition of artificial intelligence: “Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.” European Commission, “A Definition of Artificial Intelligence: Main Capabilities and Scientific Disciplines”, 8 April 2019, available at: https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines.
24 See “Common ML Problems” in Google's Introduction to Machine Learning Problem Framing course, available at: https://developers.google.com/machine-learning/problem-framing/cases.
25 Tao Liu, “An Overview of the Application of AI in Development Practice”, Berkeley MDP, available at: https://mdp.berkeley.edu/an-overview-of-the-application-of-ai-in-development-practice/.
26 L. McGregor, D. Murray and V. Ng, above note 5, p. 310.
27 For good definitions of each of these terms, see Access Now, above note 19, p. 8.
28 See UN Global Pulse's PulseSatellite project, available at: www.unglobalpulse.org/microsite/pulsesatellite/.
29 Examples include AtlasAI, EzyAgric, Apollo, FarmForce, Tulaa and Fraym.
30 See, for example, Kimetrica's Methods for Extremely Rapid Observation of Nutritional Status (MERON) tool, a project run in coordination with UNICEF that uses facial recognition to remotely diagnose malnutrition in children.
31 For more examples of AI projects in the humanitarian sector, see International Telecommunication Union, United Nations Activities on Artificial Intelligence (AI), 2019, available at: www.itu.int/dms_pub/itu-s/opb/gen/S-GEN-UNACT-2019-1-PDF-E.pdf; accepted papers of the Artificial Intelligence for Humanitarian Assistance and Disaster Response Workshop, available at: www.hadr.ai/accepted-papers; and the list of projects in DSEG, above note 9, Chap. 3.
32 UN Secretary-General's Independent Expert Advisory Group on a Data Revolution for Sustainable Development, A World That Counts: Mobilising the Data Revolution for Sustainable Development, 2014.
33 See UN Department of Economic and Social Affairs, “Least Developed Countries”, available at: www.un.org/development/desa/dpad/least-developed-country-category.html.
34 DSEG, above note 9, p. 3.
35 Cynthia Rudin and Joanna Radin, “Why Are We Using Black Box Models in AI When We Don't Need To?”, Harvard Data Science Review, Vol. 1, No. 2, 2019, available at: https://doi.org/10.1162/99608f92.5a8a3a3d.
36 See Miriam C. Buiten, “Towards Intelligent Regulation of Artificial Intelligence”, European Journal of Risk Regulation, Vol. 10, No. 1, 2019, available at: https://tinyurl.com/y8wqmp9a; Anna Jobin, Marcello Ienca and Effy Vayena, “The Global Landscape of AI Ethics Guidelines”, Nature Machine Intelligence, Vol. 1, No. 9, 2019, available at: www.nature.com/articles/s42256-019-0088-2.pdf.
37 See, for example, L. McGregor, D. Murray and V. Ng, above note 5, p. 319, explaining the various risks caused by a lack of transparency and explainability: “as the algorithm's learning process does not replicate human logic, this creates challenges in understanding and explaining the process”.
38 David Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, UN Doc. A/73/348, 29 August 2018, para. 40.
39 Ibid., speaking about the application of AI in the online information environment.
40 DSEG, above note 9, p. 7.
41 Isabel Ebert, Thorsten Busch and Florian Wettstein, Business and Human Rights in the Data Economy: A Mapping and Research Study, German Institute for Human Rights, Berlin, 2020.
42 Lindsey Andersen, “Artificial Intelligence in International Development: Avoiding Ethical Pitfalls”, Journal of Public and International Affairs, 2019, available at: https://jpia.princeton.edu/news/artificial-intelligence-international-development-avoiding-ethical-pitfalls.
43 D. Kaye, above note 38, para. 8.
44 See HRC, Question of the Realization of Economic, Social and Cultural Rights in All Countries: The Role of New Technologies for the Realization of Economic, Social and Cultural Rights. Report of the Secretary-General, UN Doc. A/HRC/43/29, 4 March 2020 (ESCR Report), p. 10. See also Ana Beduschi, “Research Brief: Human Rights and the Governance of AI”, Geneva Academy, February 2020, p. 3: “[D]ue to the increasingly sophisticated ways in which online platforms and companies track online behaviour and individuals’ digital footprints, AI algorithms can make inferences about behaviour, including relating to their political opinions, religion, state of health or sexual orientation.”
45 This partly explains the pushback against facial recognition and other biometric identification technology. See, for example, The Engine Room and Oxfam, Biometrics in the Humanitarian Sector, March 2018; Mark Latonero, “Stop Surveillance Humanitarianism”, New York Times, 11 July 2019; Dragana Kaurin, Data Protection and Digital Agency for Refugees, World Refugee Council Research Paper No. 12, May 2019.
46 ESCR Report, above note 44, p. 10.
47 D. Kaye, above note 38, paras 37–38.
48 Karen Hao, “This Is How AI Bias Really Happens – and Why It's So Hard to Fix”, MIT Technology Review, 4 February 2019, available at: www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/. For further explanation of the types of biases that are commonly present in data sets or training models, see DSEG, above note 9.
49 K. Hao, above note 48; Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, Proceedings of Machine Learning Research, Vol. 81, 2018; Inioluwa Deborah Raji and Joy Buolamwini, Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products, 2019.
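The disaggregated evaluation at the heart of audits such as Gender Shades can be sketched in a few lines of code: rather than reporting a single aggregate accuracy figure, performance is computed separately for each demographic subgroup so that disparities become visible. A minimal Python sketch follows; the subgroup labels and numbers are hypothetical illustrations, not results from the cited studies.

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute accuracy per demographic subgroup instead of one aggregate score.

    `records` is an iterable of (subgroup, predicted, actual) tuples; the
    subgroup labels used below are hypothetical.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for subgroup, predicted, actual in records:
        totals[subgroup] += 1
        hits[subgroup] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

# Invented data: the aggregate accuracy is 82%, which hides a 34-point gap.
sample = (
    [("subgroup_a", "m", "m")] * 99 + [("subgroup_a", "f", "m")] * 1
    + [("subgroup_b", "f", "f")] * 65 + [("subgroup_b", "m", "f")] * 35
)
for group, accuracy in sorted(disaggregated_accuracy(sample).items()):
    print(f"{group}: {accuracy:.0%}")  # subgroup_a: 99%, subgroup_b: 65%
```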
50 “Humanitarian action must be carried out on the basis of need alone, giving priority to the most urgent cases of distress and making no distinctions on the basis of nationality, race, gender, religious belief, class or political opinions.” OCHA, “OCHA on Message: Humanitarian Principles”, June 2012, available at: www.unocha.org/sites/dms/Documents/OOM-humanitarianprinciples_eng_June12.pdf.
51 See, for example, this discussion on the implications of automated weapons systems for international humanitarian law: Noel Sharkey, “The Impact of Gender and Race Bias in AI”, ICRC Humanitarian Law and Policy Blog, 28 August 2018, available at: https://blogs.icrc.org/law-and-policy/2018/08/28/impact-gender-race-bias-ai/.
52 DSEG, above note 9, p. 29.
53 Based on our Geneva consultations.
54 For a discussion on the challenges of collecting and analyzing data on migrant populations, see Natalia Baal and Laura Ronkainen, Obtaining Representative Data on IDPs: Challenges and Recommendations, UNHCR Statistics Technical Series No. 2017/1, 2017, available at: www.unhcr.org/598088104.pdf.
55 The UN Data Strategy of 2020 strongly emphasizes the need for capacity-building among civil servants across the UN in the areas of data use and emerging technologies.
56 Michael Chui et al., Notes from the AI Frontier: Modeling the Impact of AI on the World Economy, McKinsey Global Institute, September 2018.
57 On the data gap (concerning older persons), see HRC, Enjoyment of All Human Rights by Older Persons, UN Doc. A/HRC/42/43, 4 July 2019; HRC, Human Rights of Older Persons: The Data Gap, UN Doc. A/HRC/45/14, 9 July 2020.
58 Jasmine Wright and Andrej Verity, Artificial Intelligence Principles for Vulnerable Populations in Humanitarian Contexts, Digital Humanitarian Network, January 2020, p. 15.
59 See, for example, relevant sections in OCHA's Data Responsibility Guidelines, above note 16; the ICRC Handbook on Data Protection in Humanitarian Action, above note 17; and the Principles for Digital Development, available at: https://digitalprinciples.org/.
60 Secretary-General's Roadmap, above note 1, para. 23.
61 “Algorithms’ automation power can be useful, but can also alienate human input from processes that affect people. The use or over-use of algorithms can thus pose risks to populations affected by algorithm processes, as human input to such processes is often an important element of protection or rectification for affected groups. Algorithms can often deepen existing inequalities between people or groups, and exacerbate the disenfranchisement of specific vulnerable demographics. Algorithms, more so than other types of data analysis, have the potential to create harmful feedback loops that can become tautological in nature, and go unchecked due to the very nature of an algorithm's automation.” DSEG, above note 9, p. 29.
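The feedback loop described in this passage can be made concrete with a minimal, hypothetical simulation: if monitoring resources are sent wherever recorded need is highest, need is then only recorded where monitoring already happens, so an arbitrary initial skew compounds and the loop validates itself. All region names and figures below are invented for illustration.

```python
# Hypothetical feedback loop: records drive allocation, allocation drives records.
recorded_need = {"region_a": 55.0, "region_b": 45.0}  # invented initial records
TRUE_NEED = {"region_a": 50.0, "region_b": 50.0}      # ground truth: actually equal

for cycle in range(5):
    # The "model": send the monitoring team wherever recorded need is highest.
    target = max(recorded_need, key=recorded_need.get)
    # Need is only observed, and therefore recorded, where the team goes.
    recorded_need[target] += TRUE_NEED[target]
    share = recorded_need["region_a"] / sum(recorded_need.values())
    print(f"cycle {cycle}: visited {target}; region_a share of records: {share:.0%}")
# region_b's equal real need never enters the records, so it never attracts a visit:
# the output drifts from 70% to 87% without any change in underlying conditions.
```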
62 Petra Molnar and Lex Gill, Bots at the Gates, University of Toronto International Human Rights Program and Citizen Lab, 2018.
63 DSEG, above note 9, p. 11.
64 Based on our Geneva consultations. See also Chinmayi Arun, “AI and the Global South: Designing for Other Worlds”, in Markus D. Dubber, Frank Pasquale and Sunit Das (eds), The Oxford Handbook of Ethics of AI, Oxford University Press, Oxford, 2020.
65 D. Kaye, above note 38, para. 44.
66 UNESCO, Preliminary Study on the Ethics of Artificial Intelligence, SHS/COMEST/EXTWG-ETHICS-AI/2019/1, 26 February 2019, para. 22.
67 See, for example, the Principles for Digital Development, above note 59.
68 See UN Human Rights, UN Human Rights Business and Human Rights in Technology Project (B-Tech): Overview and Scope, November 2019, warning of the inherent human rights risks in “[s]elling products to, or partnering with, governments seeking to use new tech for State functions or public service delivery that could disproportionately put vulnerable populations at risk”.
69 Philip Alston, Report of the Special Rapporteur on Extreme Poverty and Human Rights, UN Doc. A/74/493, 11 October 2019.
70 AI Now Institute, Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems, September 2018, available at: https://ainowinstitute.org/litigatingalgorithms.pdf; P. Alston, above note 69. Note that even a perfectly designed system with humans in the loop can still lead to bad outcomes if it is not the right approach in a given context. For instance, deploying such a system in an environment of widespread, deeply rooted discrimination may actually entrench that discrimination further, even if the AI system itself is not biased and there is a human in the loop.
71 Henry McDonald, “Home Office to Scrap ‘Racist Algorithm’ for UK Visa Applicants”, The Guardian, 4 August 2020.
72 DSEG, above note 9, p. 3.
73 J. Wright and A. Verity, above note 58, p. 7.
74 Ibid., p. 6.
75 Ibid., p. 9. See also the Humanitarian Technologies Project website, available at: http://humanitariantechnologies.net.
76 See DSEG, above note 9, p. 8, warning against piloting unproven technology in humanitarian contexts.
77 Peter Cihon, Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development, Future of Humanity Institute, University of Oxford, April 2019.
78 “[T]he complex nature of algorithmic decision-making necessitates that accountability proposals be set within a wider framework, addressing the overall algorithmic life cycle, from the conception and design phase, to actual deployment and use of algorithms in decision-making.” L. McGregor, D. Murray and V. Ng, above note 5, p. 311.
79 For a summary of AI codes of ethics released by major institutions, see J. Fjeld et al., above note 8.
80 Ibid.
81 See Mark Latonero, Governing Artificial Intelligence: Upholding Human Rights and Dignity, Data & Society, 2018, arguing that human rights do not tend to be central to national AI strategies, with a few exceptions that include the EU's GDPR and strategy documents issued by the Council of Europe, the Canada and France-led Global Partnership on AI, and the Australian Human Rights Commission.
82 See P. Alston, above note 69, arguing that most AI ethics codes refer to human rights law but lack its substance and that token references are used to enhance the code's claims to legitimacy and universality.
83 Corinne Cath, Mark Latonero, Vidushi Marda and Roya Pakzad, “Leap of FATE: Human Rights as a Complementary Framework for AI Policy and Practice”, in FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, available at: https://doi.org/10.1145/3351095.3375665.
84 Ibid.
85 Consultations include meetings and workshops held by Global Pulse and UN Human Rights in Geneva, Berlin and Tunis.
86 L. McGregor, D. Murray and V. Ng, above note 5, p. 313.
87 Ibid.
88 “[Human rights] are considered universal, both because they are universally recognised by virtually each country in the world, and because they are universally applicable to all human beings regardless of any individual trait.” Nathalie A. Smuha, “Beyond a Human Rights-based Approach to AI Governance: Promise, Pitfalls, Plea”, Philosophy and Technology, 2020 (forthcoming).
89 Ibid.
90 L. McGregor, D. Murray and V. Ng, above note 5, p. 311.
91 Ibid.
92 Lyal S. Sunga, “The International Court of Justice's Growing Contribution to Human Rights and Humanitarian Law”, The Hague Institute for Global Justice, The Hague, 18 April 2016.
93 UN Human Rights, “Regional Human Rights Mechanisms and Arrangements”, available at: www.ohchr.org/EN/Countries/NHRI/Pages/Links.aspx.
94 C. Cath et al., above note 83.
95 Christian van Veen and Corinne Cath, “Artificial Intelligence: What's Human Rights Got to Do With It?”, Data & Society, 14 May 2018, available at: https://points.datasociety.net/artificial-intelligence-whats-human-rights-got-to-do-with-it-4622ec1566d5.
96 L. McGregor, D. Murray and V. Ng, above note 5.
97 See ESCR Report, above note 44; “Standards of Accessibility, Adaptability, and Acceptability”, Social Protection and Human Rights, available at: https://socialprotection-humanrights.org/framework/principles/standards-of-accessibility-adaptability-and-acceptability/.
98 Karen Yeung, Andrew Howes and Ganna Pogrebna, “AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing”, in Markus D. Dubber, Frank Pasquale and Sunit Das (eds), The Oxford Handbook of Ethics of AI, Oxford University Press, Oxford, 2020, noting that IHRL provides a “[s]tructured framework for reasoned resolution of conflicts arising between competing rights and collective interests in specific cases”, whereas AI ethics codes offer “little guidance on how to resolve such conflicts”.
99 Limitations on a right, where permissible, must be necessary for reaching a legitimate aim and must be in proportion to that aim. They must be the least intrusive option available, and must not be applied or invoked in a manner that would impair the essence of a right. They need to be prescribed by publicly available law that clearly specifies the circumstances under which a restriction may occur. See ESCR Report, above note 44, pp. 10–11. See also N. A. Smuha, above note 88, observing that similar formulas for balancing competing rights are found in the EU Charter, the European Convention on Human Rights, and Article 29 of the UDHR.
100 Catelijne Muller, The Impact of Artificial Intelligence on Human Rights, Democracy and the Rule of Law, Ad Hoc Committee on Artificial Intelligence, Strasbourg, 24 June 2020, para. 75, available at: https://rm.coe.int/cahai-2020-06-fin-c-muller-the-impact-of-ai-on-human-rights-democracy-/16809ed6da. McGregor et al. draw red lines from “the prohibition of arbitrary rights interference as a core principle underpinning IHRL [that is] relevant to all decisions that have the potential to interfere with particular rights”. L. McGregor, D. Murray and V. Ng, above note 5, p. 337. For more on the relationship between “arbitrary” and “necessary and proportionate”, see UN Human Rights, The Right to Privacy in the Digital Age: Report of the Office of the United Nations High Commissioner for Human Rights, UN Doc. A/HRC/27/37, 30 June 2014, para. 21 ff.; UN Human Rights, The Right to Privacy in the Digital Age: Report of the United Nations High Commissioner for Human Rights, UN Doc. A/HRC/39/29, 3 August 2018, para. 10.
101 IHRL “provides a clear framework for balancing competing interests in the development of technology: its tried and tested jurisprudence requires restrictions to human rights (like privacy or non-discrimination) to be prescribed by law, pursue a legitimate aim, and be necessary and proportionate to that aim. Each term is a defined concept against which actions can be objectively measured and made accountable.” Alison Berthet, “Why Do Emerging AI Guidelines Emphasize ‘Ethics’ over Human Rights?”, OpenGlobalRights, 10 July 2019, available at: www.openglobalrights.org/why-do-emerging-ai-guidelines-emphasize-ethics-over-human-rights.
102 “Furthermore, to do so, enforcers can draw on previously undertaken balancing exercises, which advances predictability and legal certainty. Indeed, decades of institutionalised human rights enforcement resulted in a rich jurisprudence that can guide enforcers when dealing with the impact of AI-systems on individuals and society and with the tensions stemming therefrom – be it in terms of conflicting rights, principles or interests.” N. A. Smuha, above note 88.
103 For further guidance on how to craft a human rights-focused impact assessment, see UN Human Rights, Guiding Principles on Business and Human Rights, New York and Geneva, 2011 (UNGPs), available at: www.ohchr.org/documents/publications/guidingprinciplesbusinesshr_en.pdf; ESCR Report, above note 44.
104 The Principles are available at: www.eff.org/files/necessaryandproportionatefinal.pdf. For background and legal analysis, see Electronic Frontier Foundation and Article 19, Necessary and Proportionate: International Principles on the Application of Human Rights to Communication Surveillance, May 2014, available at: www.ohchr.org/Documents/Issues/Privacy/ElectronicFrontierFoundation.pdf.
105 ICRC, Privacy International, UN Global Pulse and OCHA Centre for Humanitarian Data, “Guidance Note: Data Impact Assessments”, Guidance Note Series No. 5, July 2020, available at: https://centre.humdata.org/wp-content/uploads/2020/07/guidance_note_data_impact_assessments.pdf. See this Guidance Note for more examples of impact assessments designed for humanitarian contexts.
106 John H. Knox, “Horizontal Human Rights Law”, American Journal of International Law, Vol. 102, No. 1, 2008, p. 1.
107 See I. Ebert, T. Busch and F. Wettstein, above note 41. See also C. van Veen and C. Cath, above note 95, arguing that “[h]uman rights, as a language and legal framework, is itself a source of power because human rights carry significant moral legitimacy and the reputational cost of being perceived as a human rights violator can be very high”. For context on algorithmic systems, see Council of Europe, Recommendation CM/Rec(2020)1 of the Committee of Ministers to Member States on the Human Rights Impacts of Algorithmic Systems, 8 April 2020.
108 UNGPs, above note 103. Pillar I of the UNGPs outlines how States should regulate companies.
109 Ibid., Pillar II. See also UN Human Rights, Key Characteristics of Business Respect for Human Rights, B-Tech Foundational Paper, available at: www.ohchr.org/Documents/Issues/Business/B-Tech/key-characteristics-business-respect.pdf.
110 See Council of Europe, Addressing the Impacts of Algorithms on Human Rights: Draft Recommendation, MSI-AUT(2018)06rev3, 2018: “Private sector actors engaged in the design, development, sale, deployment, implementation and servicing of algorithmic systems, whether in the public or private sphere, must exercise human rights due diligence. They have the responsibility to respect internationally recognised human rights and fundamental freedoms of their customers and of other parties who are affected by their activities. This responsibility exists independently of States’ ability or willingness to fulfil their human rights obligations.” See also D. Kaye, above note 38.
111 HRC, Impact of New Technologies on the Promotion and Protection of Human Rights in the Context of Assemblies, including Peaceful Protests: Report of the UN High Commissioner for Human Rights, UN Doc. A/HRC/44/24, 24 June 2020.
112 UN Human Rights, The UN Guiding Principles in the Age of Technology, B-Tech Foundational Paper, available at: www.ohchr.org/Documents/Issues/Business/B-Tech/introduction-ungp-age-technology.pdf.
113 Examples include Microsoft's human rights impact assessment (HRIA) and Google's Celebrity Recognition HRIA; and see Element AI, Supporting Rights-Respecting AI, 2019; Telefonica, “Our Commitments: Human Rights”, available at: www.telefonica.com/en/web/responsible-business/human-rights.
114 L. McGregor, D. Murray and V. Ng, above note 5.
115 Microsoft has produced a number of publications on its FATE work. See “FATE: Fairness, Accountability, Transparency, and Ethics in AI”, available at: www.microsoft.com/en-us/research/group/fate#!publications.
116 C. Cath et al., above note 83.
117 For useful background on the pros and cons of the AI ethics and human rights frameworks, see Business for Social Responsibility (BSR) and World Economic Forum (WEF), Responsible Use of Technology, August 2019, p. 7 (arguing that ethics and human rights should be “synergistic”).
118 Access Now, above note 19.
119 Ben Wagner, “Ethics as an Escape from Regulation: From Ethics-Washing to Ethics-Shopping?”, in Emre Bayamlioglu, Irina Baraliuc, Liisa Janssens and Mireille Hildebrandt (eds), Being Profiled: Cogitas Ergo Sum. 10 Years of Profiling the European Citizen, Amsterdam University Press, Amsterdam, 2018.
120 Based on our Geneva consultations. See also Josh Cowls and Luciano Floridi, “Prolegomena to a White Paper on an Ethical Framework for a Good AI Society”, June 2018, available at: https://papers.ssrn.com/abstract=3198732.
121 Ibid., arguing that ethics and human rights can be mutually reinforcing and that ethics can go beyond human rights. See also BSR and WEF, above note 117.
122 Access Now, above note 19, p. 17.
123 BSR and WEF, above note 117.
124 Miguel Luengo-Oroz, “Solidarity Should Be a Core Ethical Principle of AI”, Nature Machine Intelligence, Vol. 1, No. 11, 2019.
125 See, for example, the UN Global Pulse “Projects” web page, available at: www.unglobalpulse.org/projects/.
126 UN, UN Secretary-General's Strategy on New Technologies, September 2018, available at: www.un.org/en/newtechnologies/.
127 High-Level Panel on Digital Cooperation, The Age of Digital Interdependence: Report of the UN Secretary-General's High-Level Panel on Digital Cooperation, June 2019 (High-Level Panel Report), available at: https://digitalcooperation.org/wp-content/uploads/2019/06/DigitalCooperation-report-web-FINAL-1.pdf.
128 UNESCO issued a preliminary set of AI principles in 2019 and is in the process of drafting a standard-setting instrument for the ethics of AI. A revised first draft of a recommendation was presented in September 2020. Other entities, including the Organization for Economic Cooperation and Development (OECD) and the European Commission, have released their own sets of principles. OECD, Recommendation of the Council on Artificial Intelligence, 21 May 2019; European Commission, Ethics Guidelines for Trustworthy AI, 8 April 2019, available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. At the Council of Europe, the Committee of Ministers has adopted Recommendation CM/Rec(2020)1, above note 107. The Council of Europe is also investigating the possibility of adopting a legal framework for the development, design and application of AI, based on the Council of Europe's standards on human rights, democracy and the rule of law; see Council of Europe, “CAHAI – Ad Hoc Committee on Artificial Intelligence”, available at: www.coe.int/en/web/artificial-intelligence/cahai.
129 Secretary-General's Roadmap, above note 1, para. 88. See also Recommendation 3C of the High-Level Panel Report, above note 127, pp. 38–39, which reads: “[A]utonomous intelligent systems should be designed in ways that enable their decisions to be explained and humans to be accountable for their use. Audits and certification schemes should monitor compliance of artificial intelligence (AI) systems with engineering and ethical standards, which should be developed using multi-stakeholder and multilateral approaches. Life and death decisions should not be delegated to machines. … [E]nhanced digital cooperation with multiple stakeholders [is needed] to think through the design and application of … principles such as transparency and non-bias in autonomous intelligent systems in different social settings.”
130 See A. Beduschi, above note 44, arguing for technical standards that “incorporat[e] human rights rules and principles”.
131 For a breakdown of how individual UDHR rights and principles are implicated by the use of AI systems, see Access Now, above note 19.
132 A growing number of jurisdictions have issued bans on facial recognition technology, or on the use of such technology in criminal justice contexts. However, some organizations have been more hesitant to embrace red lines. See Chris Klöver and Alexander Fanta, “No Red Lines: Industry Defuses Ethics Guidelines for Artificial Intelligence”, trans. Kristina Penner, Algorithm Watch, 9 April 2019, available at: https://algorithmwatch.org/en/industry-defuses-ethics-guidelines-for-artificial-intelligence/ (where one source blames the absence of red lines in the EU's ethics guidelines on industry pressure).
133 “Although total explainability of ML-based systems is not currently possible, developers can still provide valuable information about how a system works. Publish easy-to-understand explainers in the local language. Hold community meetings to explain the tool and allow community members to ask questions and provide feedback. Take care to consider literacy levels and the broader information ecosystem. An effective public educational process utilizes the existing ways in which a community receives and shares information, whether that be print, radio, word of mouth, or other channels.” L. Andersen, above note 42.
134 See “Common ML Problems”, above note 24.
135 See ESCR Report, above note 44, para. 52, arguing that the knowledge and understanding gap between the public and decision-makers can be “a particular problem in the context of the automated decision-making processes that rely on artificial intelligence”; that “[c]omprehensive, publicly available information is important to enable informed decision-making and the relevant consent of affected parties”; and that “[r]egulations requiring companies to disclose when artificial intelligence systems are used in ways that affect the exercise of human rights and share the results of related human rights impact assessments may also be a helpful tool”. See also L. McGregor, D. Murray and V. Ng, above note 5, arguing that transparency includes why and how the algorithm was created; the logic of the model or overall design; the assumptions underpinning the design process; how performance is monitored; how the algorithm itself has changed over time; the factors relevant to the algorithm's functioning; and the level of human involvement.
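The transparency elements that McGregor et al. enumerate could be operationalized as a structured documentation record kept alongside the system, with deployment held back until every element is filled in. The sketch below is a hypothetical illustration of that idea; the field names are our own, not a schema proposed by the cited authors.

```python
# Hypothetical checklist mirroring the transparency elements listed above.
REQUIRED_TRANSPARENCY_FIELDS = [
    "purpose",                 # why and how the algorithm was created
    "model_logic",             # the logic of the model or overall design
    "design_assumptions",      # assumptions underpinning the design process
    "performance_monitoring",  # how performance is monitored
    "change_history",          # how the algorithm itself has changed over time
    "relevant_factors",        # factors relevant to the algorithm's functioning
    "human_involvement",       # the level of human involvement
]

def transparency_gaps(record: dict) -> list:
    """Return the transparency elements that are missing or left empty."""
    return [field for field in REQUIRED_TRANSPARENCY_FIELDS if not record.get(field)]

# A partially documented (hypothetical) system: deployment should wait until
# transparency_gaps() returns an empty list.
draft = {"purpose": "Prioritize shelter repairs after flooding", "model_logic": ""}
print(transparency_gaps(draft))
```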
136 Sam Ransbotham, “Justifying Human Involvement in the AI Decision-Making Loop”, MIT Sloan Management Review, 23 October 2017, available at: https://sloanreview.mit.edu/article/justifying-human-involvement-in-the-ai-decision-making-loop/.
137 See L. McGregor, D. Murray and V. Ng, above note 5, arguing that human-in-the-loop acts as a safeguard, ensuring that the algorithmic system supports but does not make the decision.
138 “AI is most exciting when it can both absorb large amounts of data and identify more accurate correlations (diagnostics), while leaving the causational conclusions and ultimate decision-making to humans. This human-machine interaction is particularly important for social-impact initiatives, where ethical stakes are high and improving the lives of the marginalized is the measure of success.” Hala Hanna and Vilas Dhar, “How AI Can Promote Social Good”, World Economic Forum, 24 September 2019, available at: www.weforum.org/agenda/2019/09/artificial-intelligence-can-have-a-positive-social-impact-if-used-ethically/.
139 One hypothetical raised by a participant at our Geneva event was as follows: a person in a government office is using automated decision-making to decide whose child gets taken away. The algorithm gives a score of “7”. How does this score influence the operator? Does it matter if they're having a good or bad day? Are they pressured to take the score into consideration, either institutionally or interpersonally (by co-workers)? Are they personally penalized if they ignore or override the system?
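The hypothetical in the note above suggests one concrete safeguard: treat the model's score as strictly advisory, require an explicit human decision with a written rationale, and log every agreement or override so that institutional pressure on operators can be audited later. The following Python sketch illustrates that pattern under our own assumptions; it does not describe any deployed system, and the threshold of 7 simply echoes the hypothetical score.

```python
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice, persistent and access-controlled storage

def decide(case_id: str, model_score: float, reviewer: str,
           human_decision: str, rationale: str) -> str:
    """Record an advisory model score alongside a mandatory human decision.

    The score never triggers action by itself: the reviewer's decision and a
    written rationale are required, and overrides are flagged for later audit.
    """
    advisory = "flag" if model_score >= 7 else "no_flag"  # hypothetical threshold
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "case": case_id,
        "model_score": model_score,
        "model_advisory": advisory,
        "reviewer": reviewer,
        "decision": human_decision,
        "override": human_decision != advisory,
        "rationale": rationale,
    })
    return human_decision

# The reviewer disagrees with the score of 7; the override is logged, not penalized.
decide("case-001", 7.0, "caseworker-42", "no_flag",
       "Home visit found no risk indicators; score driven by outdated address data.")
```

Reviewing the override rate in such a log over time would help reveal whether operators feel pressured, institutionally or interpersonally, to defer to the score.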
140 See Edward Rubin, “The Myth of Accountability and the Anti-administrative Impulse”, Michigan Law Review, Vol. 103, No. 8, 2005.
141 See UN Human Rights, above note 68, outlining the novel accountability challenges raised by AI.
142 High-Level Panel Report, above note 127, Recommendation 3C, pp. 38–39.
143 UNGPs, above note 103, para. 29: “To make it possible for grievances to be addressed early and remediated directly, business enterprises should establish or participate in effective operational-level grievance mechanisms for individuals and communities who may be adversely impacted.”
144 See I. Ebert, T. Busch and F. Wettstein, above note 41. See also Committee on the Elimination of Racial Discrimination, General Recommendation No. 36 on Preventing and Combating Racial Profiling by Law Enforcement Officials, UN Doc. CERD/C/GC/36, 17 December 2020, para. 66: “States should encourage companies to carry out human rights due diligence processes, which entail: (a) conducting assessments to identify and assess any actual or potentially adverse human rights impacts; (b) integrating those assessments and taking appropriate action to prevent and mitigate adverse human rights impacts that have been identified; (c) tracking the effectiveness of their efforts; and (d) reporting formally on how they have addressed their human rights impacts.”
145 See ESCR Report, above note 44, para. 51. The UNGPs make HRDD a key expectation of private companies. The core steps of HRDD, as provided for by the UNGPs, include (1) identifying harms, consulting with stakeholders, and ensuring public and private actors also conduct assessments (if the system will be used by a government entity); (2) taking action to prevent and mitigate harms; and (3) being transparent about efforts to identify and mitigate harms. Access Now, above note 19, pp. 34–35.
146 D. Kaye, above note 38, para. 68, noting that HRIAs “should be carried out during the design and deployment of new artificial intelligence systems, including the deployment of existing systems in new global markets”.
147 Danish Institute for Human Rights, “Human Rights Impact Assessment Guidance and Toolbox”, 25 August 2020, available at: www.humanrights.dk/business/tools/human-rights-impact-assessment-guidance-toolbox.
148 “To address the challenges and opportunities of protecting and advancing human rights, human dignity and human agency in a digitally interdependent age, the Office of the United Nations High Commissioner for Human Rights will develop system-wide guidance on human rights due diligence and impact assessments in the use of new technologies, including through engagement with civil society, external experts and those most vulnerable and affected.” Secretary-General's Roadmap, above note 1, para. 86.
149 C. Cath et al., above note 83.
150 UN Global Pulse, “Risks Harms and Benefits Assessment”, available at: www.unglobalpulse.org/policy/risk-assessment/.
151 Element AI, above note 113, p. 9.
152 UNGPs, above note 103, Commentary to Principle 16, p. 17.
153 Participants at our Geneva consultations used the term “explanatory models”, though this is not yet a widely used term.
154 UN Global Pulse, above note 150. See also OCHA, “Guidance Note: Data Responsibility in Public-Private Partnerships”, 2020, available at: https://centre.humdata.org/guidance-note-data-responsibility-in-public-private-partnerships/.
155 D. Kaye, above note 38, para. 68.
156 Ibid., para. 55.
157 “Private sector actors have raised objections to the feasibility of audits in the AI space, given the imperative to protect proprietary technology. While these concerns may be well founded, the Special Rapporteur agrees … that, especially when an AI application is being used by a public sector agency, refusal on the part of the vendor to be transparent about the operation of the system would be incompatible with the public body's own accountability obligations.” Ibid., para. 55.
158 “Each of these mechanisms may face challenges in implementation, especially in the information environment, but companies should work towards making audits of AI systems feasible. Governments should contribute to the effectiveness of audits by considering policy or legislative interventions that require companies to make AI code auditable, guaranteeing the existence of audit trails and thus greater opportunities for transparency to individuals affected.” Ibid., para. 57.
159 Based on our consultations.
160 Based on our consultations.
161 Element AI, above note 113.
162 Several UN processes that are under way may serve this purpose, including UNESCO's initiative to create the UN's first standard-setting instrument on AI ethics, and the UN Secretary-General's plans to create a global advisory body on AI cooperation.
163 Element AI, above note 113.
164 See M. Latonero, above note 81, calling for UN human rights investigators and Special Rapporteurs to continue researching and publicizing the human rights impacts of AI systems.
165 Access Now, above note 19.
166 OCHA, above note 154.
167 N. A. Smuha, above note 88.
168 For more guidance on private sector HRDD, see UNGPs, above note 103, Principle 17.