I. Introduction
Every generation has its topic: The topic of our generation is digitalization. At present, we are all witnessing the so-called industrial revolution 4.0.Footnote 1 This revolution is characterized by the use of a whole range of new digital technologies that can be combined in a variety of ways. Keywords are self-learning algorithms, Artificial Intelligence (AI), autonomous systems, Big Data, biometrics, cloud computing, Internet of Things, mobile internet, robotics, and social media.Footnote 2
The use of digital technologies challenges the law and those applying it. The range of questions and problems is tremendously broad.Footnote 3 Widely discussed examples are self-driving cars,Footnote 4 the use of digital technologies in corporate finance, credit financing and credit protection,Footnote 5 the digital estate,Footnote 6 or online dispute resolution.Footnote 7 In fact, digital technologies challenge the entire national legal system including public and criminal law as well as EU and international law. Some even say we may face ‘the beginning of the end for the law’.Footnote 8 In fact, this is not the end, but rather the time for a digital initiative. This chapter focuses on the changes that AI brings about in corporate law and corporate governance, especially in terms of the challenges for corporations and their executives.
From a conceptual perspective, AI applications will have a major impact on corporate law in general and corporate governance in particular. In practice, AI poses a tremendous challenge for corporations and their executives. As algorithms have already entered the boardroom, lawmakers must consider legally recognizing e-persons as directors and managers. The applicable law must deal with effects of AI on corporate duties of boards and their liabilities. The interdependencies of AI, delegation of leadership tasks, and the business judgement rule as a safe harbor for executives are of particular importance. A further issue to be addressed is how AI will change the decision-making process in corporations as a whole. This topic is closely connected with the board’s duties in Big Data and Data Governance as well as the qualifications and responsibilities of directors and managers.
By referring to AI, I mean information technology systems that reproduce or approximate various cognitive abilities of humans.Footnote 9 In the same breath, we need to distinguish between strong AI and weak AI. Currently, strong AI does not exist.Footnote 10 There is no system that truly imitates a human being, such as a so-called superintelligence. Only weak AI is applied today. These are individual technologies for smart human–machine interactions, such as machine learning or deep learning. Weak AI focuses on the solution of specific application problems based on methods from mathematics and computer science, whereby the systems are capable of self-optimization.Footnote 11
By referring to corporate governance, I mean a system by which companies are directed and controlled.Footnote 12 In continental European jurisdictions, such as Germany, a dual board structure is the prevailing system with a management board running the day-to-day business of the firm and a supervisory board monitoring the business decisions of the management board. In Anglo-American jurisdictions, such as the United States (US) and the United Kingdom (UK), the two functions of management and supervision are combined within one unitary board – the board of directors.Footnote 13
II. Algorithms As Directors
The first question is, “Could and should algorithms act as directors?” In 2014, newspapers reported that a venture capital firm had just appointed an algorithm to its board of directors. The Hong Kong-based VC firm Deep Knowledge Ventures was reported to have appointed an algorithm called Vital (an abbreviation for Validating Investment Tool for Advancing Life Sciences) to serve as a director with full voting rights and full decision-making power over corporate measures.Footnote 14 In fact, Vital only had observer and adviser status with regard to the board members, who are all natural persons.Footnote 15
Under German law according to sections 76(3) and 100(1)(1) AktG,Footnote 16 the members of the management board and the supervisory board must be natural persons with full legal capacity. Not even corporations are allowed to serve as board members. This means that, in order to appoint algorithms as directors, the law would have to be changed.Footnote 17 Admittedly, the lawmaker could legally recognize e-persons as directors. However, the lawmaker should not do so, because there is a reason for the exclusion of legal persons and algorithms under German law: both lack personal liability and personal accountability for the management and the supervision of the company.Footnote 18
Nevertheless, the European Parliament enacted a resolution with recommendations to the Commission on Civil Law Rules on Robotics, and suggested therein
creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.Footnote 19
The most fundamental requirement for legally recognizing an e-person would be its own liability – based either on an ownership fund or on mandatory liability insurance. If corporations appoint AI entities as directors (or otherwise deploy them), they should be strictly liable for damages caused by AI applications in order to mitigate the particular challenges and potential risks of AI.Footnote 20 This is because strict liability would not only delegate the risk assessment and thus control the level of care and activity, but would also create an incentive for further developing this technology.Footnote 21 At the same time, creditors of the company should be protected by compulsory liability insurance, whereas piercing the corporate veil, that is, a personal liability of the shareholders, must remain a rare exception.Footnote 22 However, at an international level, regulatory competition makes it difficult to guarantee comparable standards. Harmonization can only be expected (if ever) in supranational legal systems, such as the European Union.Footnote 23 In this context, it is noteworthy that the EU Commission’s White Paper on AI presented in 2020 does not address the question of the legal status of algorithms at all.Footnote 24
However, even if we were to establish such a liability safeguard, an algorithm does not act in its own interest as long as there is no strong AI. True, circumstances may change in the future due to technological progress. However, there is a long and winding road to the notorious superintelligence.Footnote 25 For now, weak AI only carries out actions in the third-party interest of people or organizations, and is currently not in a position to make its own value decisions and judgemental considerations.Footnote 26 In the end, current algorithms are nothing more than digital slaves, albeit slaves with superhuman abilities. In addition, the currently applicable incentive system of corporate law and governance would have to be adapted to AI directors because – unlike human directors – duties of loyalty can hardly be applied to them; rather, they decide according to algorithmic models.Footnote 27 At present, only humans have original creative power; only they are capable of making decisions and acting in the true sense of the word.Footnote 28
III. Management Board
Given the current limitations of AI, we will continue to have to get by with human directors for the next few decades. Although algorithms do not currently appear suitable for making independent corporate decisions, AI can nonetheless support human directors in their management and monitoring tasks. AI is already used in practice to analyze and forecast the financial development of a company, but also to identify the need for optimization in an entrepreneurial value chain.Footnote 29 In addition, AI applications are used in the run-up to mergers and acquisitions (M&A) transactions,Footnote 30 especially as part of due diligence, in order to simplify particularly labor-intensive processes when checking documents. Algorithms are also able to recognize unusual contract clauses, to summarize essential parameters of contracts, and even to create contract templates themselves.Footnote 31 Further examples of the use of AI applications are cybersecurityFootnote 32 and compliance management systems.Footnote 33
1. Legal Framework
With regard to the German corporate governance system, the management board is responsible for running the company.Footnote 34 Consequently, the management board also decides on the overall corporate strategy, the degree of digitalization and the use of AI applications.Footnote 35 The supervisory board monitors the business decisions of the management board, decides on the approval of particularly important measures,Footnote 36 as well as on the appointment and removal of the management board members;Footnote 37 whereas the shareholders meeting does not determine a company’s digitalization structures.Footnote 38
2. AI-Related Duties
In principle, the use of AI neither constitutes a violation of corporate law or the articles of association,Footnote 39 nor is it an expression of bad corporate governance. Even if the use of AI is associated with risks, it is difficult to advise companies – as the safest option – to forgo it completely.Footnote 40 Instead, the use of AI places special demands on the management board members.
a. General Responsibilities
Managers must have a fundamental understanding of the relevant AI applications, of their potential, suitability, and risks. However, the board members do not need in-depth knowledge of the detailed functioning of a certain AI application. In particular, the knowledge of an IT expert cannot be demanded, nor can a detailed examination of the material correctness of the decision.Footnote 41 Rather, they need an understanding of the scope and limits of an application and of its possible results and outcomes, in order to perform plausibility checks and thus prevent incorrect decisions quickly and effectively.Footnote 42 The management board has to ensure, through test runs, the functionality of the application with regard to the concrete fulfilment of tasks in the specific company environment.Footnote 43 If, according to the specific nature of the AI application, there is the possibility of an adjustment to the concrete circumstances of the company, for example, with regard to the firm’s risk profile or statutory provisions, then the management board is obliged to carry out such an adjustment.Footnote 44 During the use of the AI, the board must continuously evaluate and monitor the working methods, information procurement, and information evaluation as well as the results achieved.
The management board must implement a system that eliminates, as far as possible, the risks and false results arising from the use of AI. This system must ensure that anyone who uses AI knows the respective scope of possible results of an application, so that it can be determined whether a concrete result is still within the possible range of results. However, this can hardly be determined in the abstract; it requires a close look at the concrete AI application. Furthermore, the market standard is to be included in the analysis. If all companies in a certain industry use certain AI applications that are considered safe and effective, then the use of such applications by other companies will rarely prove to be a breach of the management board’s duty of care.
Under these conditions, the management board is allowed to delegate decisions and tasks to an AI application.Footnote 45 This is not contradicted by the fact that algorithms lack legal capacity, because in this context the board’s own duties are decisive.Footnote 46 In any event, a blanket self-commitment to the results of an AI application is incompatible with the management responsibility and personal accountability of the board members.Footnote 47 At all times, the applied AI must be manageable and controllable in order to ensure that no human loss of control occurs and the decision-making process is comprehensible. The person responsible for applying AI in a certain corporate setting must always be able to operate the off-switch. In normative terms, this requirement is derived from section 91(2) AktG, which obliges the management board to take suitable measures to identify, at an early stage, developments that could jeopardize the continued existence of the company.Footnote 48 In addition, the application must be protected against external attacks, and emergency precautions must be implemented in the event of a technical malfunction.Footnote 49
b. Delegation of Responsibility
The board may delegate the responsibility for applying AI to subordinate employees, but it is required to carefully select, instruct, and supervise the delegate.Footnote 50 Under the prevailing view, however, core tasks cannot be delegated, as board members are not allowed to evade their leadership responsibility.Footnote 51 Such non-delegable management tasks of the management board include basic measures with regard to the strategic direction, business policy, and organization of the company.Footnote 52 The decision as to whether and to what extent AI should be used in the company is also a management measure that cannot be delegated under the prevailing view.Footnote 53 Only the preparation of decisions by auxiliary persons is permissible, as long as the board of directors makes the decision personally and on its own responsibility. In this respect, the board is responsible for the selection of AI use and the application of AI in general. The board has to provide the necessary information, must exclude conflicts of interest, and has to perform plausibility checks of the results obtained. Furthermore, the managers must conduct ongoing monitoring and ensure that the assigned tasks are properly performed.
c. Data Governance
AI relies on extensive data sets (Big Data). In this respect, the management board is responsible for a wide scope and high quality of the available data, for the suitability and training of AI applications, and for the coordination of the model predictions with the objectives of the respective company.Footnote 54 In addition, the board of directors must observe data protection law limitsFootnote 55 and must pursue a non-discriminatory procedure.Footnote 56 If AI use is not in line with these regulations or other mandatory provisions, the management board violates the duty of legality.Footnote 57 In this case, the management board does not benefit from the liability privilege of the business judgement rule.Footnote 58
Apart from that, the management board has an entrepreneurial discretion with regard to the proper organization of the company’s internal knowledge organization.Footnote 59 The starting point is the management board’s duty to ensure a legal, statutory, and appropriate organizational structure.Footnote 60 The specific scope and content of the obligation to organize knowledge depends largely on the type, size, and industry of the company and its resources.Footnote 61 However, if, according to these principles, there is a breach of the obligation to store, forward, and actually query information, then the company will be considered to have acted with knowledge or negligent ignorance under German law.Footnote 62
d. Management Liability
If managers violate these obligations (and do not benefit from the liability privilege of the business judgement rule),Footnote 63 they can be held liable for damages to the company.Footnote 64 This applies in particular in the event of an inadmissible or inadequate delegation.Footnote 65 In order to mitigate their liability risk, management board members have to ensure that the whole framework of AI usage in terms of specific applications, competences, and responsibilities as well as the AI-related flow of information within the company is well designed and documented in detail. Conversely, board members are not liable for individual algorithmic errors as long as (1) the algorithm works reliably, (2) the algorithm does not make unlawful decisions, (3) there are no conflicts of interest, and (4) the AI’s functioning is fundamentally overseen and properly documented.Footnote 66
Comprehensive documentation of the circumstances that prompted the management board to use a certain AI and of the specific circumstances of its application reduces the risk of being sued for damages by the company. This ensures, in particular, that the members of the management board can handle the burden of proof incumbent on them according to section 93(2)(2) AktG. They will manage this the better, the more clearly the decision-making process regarding the use of AI can be reconstructed from the written documents.Footnote 67 This kind of documentation by the management board is to be distinguished from the general documentation requirements discussed at the European and national level for the development of AI models and for access authorization to this documentation, the details of which are beyond the scope of this chapter.Footnote 68
e. Composition of the Management Board
In order to cope with the challenges that the use of AI applications causes, the structure and composition of the management and the board have already changed significantly. This manifests itself in the establishment of new management positions, such as a Chief Information Officer (CIO)Footnote 69 or a Chief Digital Officer (CDO).Footnote 70 Almost half of the 40 largest German companies have such a position at board level.Footnote 71
In addition, soft factors are becoming increasingly important in corporate management. Just think of damage to the company’s reputation, which today is one of the tangible economic factors of a company.Footnote 72 Under the term Corporate Digital Responsibility (CDR), specific responsibilities are developing for the use of AI and other digital innovations.Footnote 73 For example, Deutsche Telekom AG has enacted nine guidelines for responsible AI in a corporate setting. SAP SE established an advisory board for responsible AI consisting of experts from academia, politics, and industry. These developments, of course, have an important influence on the overall knowledge attribution within the company and a corporate group. AI and Big Data make information available faster and facilitate the decision-making process at board level. Therefore, the management board must examine whether the absence of any AI application in the information gathering and decision-making process is in the best interest of the company. However, a duty to use AI applications only exists in exceptional cases and depends on the market standard in the respective industry. The greater the amount of data to be managed and the more complex and calculation-intensive the decisions in question, the more likely it is that the management board will be obliged to use AI.Footnote 74
3. Business Judgement Rule
This point is closely connected with the application of the business judgement rule as a safe harbor for AI use. Under the general concept of the business judgement rule that is well known in many jurisdictions,Footnote 75 as it is in Germany according to section 93(1)(2) AktG, a director cannot be held liable for an entrepreneurial decision if there is no conflict of interest and she had good reason to assume that she was acting based on adequate information and for the benefit of the company.
a. Adequate Information
The requirement of adequate information depends significantly on the ability to gather and analyze information. Taking into account all the circumstances of the specific individual case, the board of directors has a considerable amount of leeway to judge which information is to be obtained from an economic point of view in the time available and to be included in the decision-making process. Neither a comprehensive nor the best possible, but only an appropriate information basis is necessary.Footnote 76 In addition, the appropriateness is to be assessed from the subjective perspective of the board members (‘could reasonably assume’), so that a court is effectively prevented during the subsequent review from substituting its own understanding of appropriateness for the subjective assessment of the decision-maker.Footnote 77 In the context of litigation, a plausibility check based on justifiability is decisive.Footnote 78
In general, the type, size, purpose, and organization of the company as well as the availability of a functional AI and the data required for operation are relevant for answering the question of the extent to which AI must be used in the context of information-based decision preparation. The cost of the AI system and the proportionality of the information procurement must also be taken into account.Footnote 79 If there is a great amount of data to be managed and a complex and calculation-intensive decision to be made, AI and Big Data applications are of major importance and the members of the management board will hardly be able to justify not using AI.Footnote 80 In any event, the use of AI to obtain information is not objectionable.Footnote 81
b. Benefit of the Company
Furthermore, the board of directors must reasonably assume to act in the best interest of the company when using AI. This criterion is to be assessed from an ex ante perspective, not ex post.Footnote 82 According to the mixed-subjective standard, it depends largely on the concrete perception of the acting board members at the time of the entrepreneurial decision.Footnote 83 In principle, the board of directors is free to organize the operation of the company according to its own ideas, as long as it stays within the limits of the best interest of the corporationFootnote 84 that are informed solely by the existence and the long-term and sustainable profitability of the company.Footnote 85 Only when the board members act in a grossly negligent manner or take irresponsible risks do they act outside the company’s best interest.Footnote 86 Taking all these aspects into account, the criterion of acceptability proves to be a suitable benchmark.Footnote 87
In the specific decision-making process, all advantages and disadvantages of using or delegating the decision to use AI applications must be included and carefully weighed against one another for the benefit of the company. In this context, however, it cannot simply be seen as unacceptable and contrary to the welfare of the company that the decisions made by or with the support of AI can no longer be understood from a purely human perspective.Footnote 88 On the one hand, human decisions that require a certain originality and creativity cannot always be traced down to the last detail. On the other hand, one of the major potentials of AI is to harness particularly creative and original ideas in the area of corporate management. AI can, therefore, be used as long as its use is not associated with unacceptable risks. The business judgement rule allows the management board to consciously take at least justifiable risks in the best interest of the company.
However, the management board may also conclude that applying AI is just too much of a risk for the existence or the profitability of the firm and therefore may refrain from it without taking a liability risk under section 93(1)(2) AktG.Footnote 89 The prerequisite for this is that the board performs a conscious act of decision-making.Footnote 90 Otherwise, acting in good faith for the benefit of the company is ruled out a priori. This decision can also consist of a conscious toleration or omission.Footnote 91 The same applies to intuitive action,Footnote 92 even if in this case the other requirements of section 93(1)(2) AktG must be subjected to a particularly thorough examination.Footnote 93 Furthermore, in addition to the action taken, there must have been another alternative,Footnote 94 even if only to omit the action taken. Even if the decision makers submit themselves to an actual or supposed necessity,Footnote 95 they could at least hypothetically have omitted the action. Apart from that, the decision does not need to manifest itself in a formal act of forming a will; in particular, a resolution by the collective body is not a prerequisite. Conversely, with a view to a later (judicial) dispute, it makes sense to sufficiently document the decision.Footnote 96
c. Freedom from Conflicts of Interest
The management board must make the decision for or against the use of AI free of extraneous influences and special interests.Footnote 97 The business judgement rule does not apply if the board members are not solely guided by the points mentioned above, but rather pursue other, namely self-interested, goals. If the use of AI is not based on inappropriate interests and the board of directors has not influenced the parameters specified for the AI in a self-interested manner, the use of AI applications can contribute to a reduction of transaction costs from an economic point of view and mitigate the principal-agent conflict, as the interest of the firm will be aligned with decisions made by AI.Footnote 98 That is, AI can make the decision-making process (more) objective.Footnote 99 However, in order to achieve an actually objective result, the quality of the data used is decisive. If the data set itself is characterized by discriminatory or incorrect information, the result will also suffer from those weaknesses (‘garbage in – garbage out’). Moreover, if the management board is in charge of developing AI applications inside the firm, it may have an interest in choosing experts and technology designs that favor its own benefit rather than the best interest of the company. This development could aggravate the principal-agent conflict within the large public firm.Footnote 100
IV. Supervisory Board
For this reason, it will also be of fundamental importance in the future to have an institutional monitoring body in the form of the supervisory board, which enforces the interests of the company as part of the internal corporate governance system. With regard to the monitoring function, a distinction is to be made as to whether the supervisory board makes use of AI itself while monitoring and advising the management of the company, or whether it monitors and advises with regard to the use of AI by the management board.
1. Use of AI by the Supervisory Board Itself
As the members of the management board and of the supervisory board have to comply with the same basic standards of care and responsibility under sections 116(1) and 93(1)(1) AktG, the management board’s AI-related dutiesFootnote 101 essentially apply to the supervisory board accordingly. If the supervisory board is making an entrepreneurial decision, it can also rely on the business judgement rule.Footnote 102 This is true, for example, for the granting of approval for transactions requiring approval under section 111(4)(2) AktG, such as M&A transactions.Footnote 103 Furthermore, the supervisory board may use AI-based personality and fitness checks when it appoints and dismisses management board members.Footnote 104 AI applications can help the supervisory board to structure the remuneration of the management board appropriately. They can also be useful for the supervisory board when auditing the accounting and in the compliance area, because they are able to analyze large amounts of data and uncover inconsistencies.Footnote 105
2. Monitoring of the Use of AI by the Management Board
When it comes to the monitoring of and advice on the use of AI by the management board, the supervisory board has to fulfil its general monitoring obligation under section 111(1) AktG. The starting point is the reporting by the management board under section 90 AktG.Footnote 106 In particular, strategic decisions on the guiding principles of AI use are part of the intended business policy or at least another fundamental matter regarding the future conduct of the company’s business according to section 90(1)(1) AktG. Furthermore, the usage of certain AI applications may qualify as transactions that may have a material effect upon the profitability or liquidity of the company under section 90(1)(4) AktG. In this regard, the management board does not need to derive and trace the decision-making process of the AI in detail. Rather, it is sufficient for the management board to report to the supervisory board about the result found and how it specifically used the AI, monitored its functions, and checked the plausibility of the result.Footnote 107 In addition, pursuant to section 90(3) AktG, the supervisory board may require at any time a report from the management board on the affairs of the company and on the company’s legal and business relationships with affiliated enterprises. This report may also deal with AI-related developments at the management board level and in other entities of a corporate group.
Finally, the supervisory board may inspect and examine the books and records of the company according to section 111(2)(1) AktG. It is undisputed that this also includes electronic records,Footnote 108 which the supervisory board can examine using AI in the form of a big data analysis.Footnote 109 Conversely, the supervisory board does not need to conduct its own inquiries using its information rights without sufficient cause or in the event of regular and orderly business development.Footnote 110 Contrary to what the literature suggests,Footnote 111 this applies even in the event that the supervisory body has unhindered access to the company’s internal management information system.Footnote 112 The opposing view not only disregards the principle of trusting cooperation between the management board and the supervisory board, but would also overstretch the supervisory board members in terms of time.Footnote 113
With a view to the monitoring standard, the supervisory board has to assess the management board’s overall strategy as regards AI applications and especially the systemic risks that result from the usage of AI in the company. This also comprises the monitoring of the AI-based management and organizational structure of the company.Footnote 114 If it recognizes violations in the management board’s use of AI, the supervisory board has to intervene using its general means of action. This may start with giving advice to the management board on how to optimize the AI strategy. Furthermore, the supervisory board may establish an approval requirement with regard to the overall AI-based management structure. In addition, the supervisory board may draw personnel conclusions and install an AI expert at the management board level, such as a CIO or CDO.Footnote 115
V. Conclusion
AI is not the end of corporate governance as some authors predicted.Footnote 116 Rather, AI has the potential to change the overall corporate governance system significantly. As this chapter has shown, AI can improve corporate governance structures, especially when it comes to handling big data sets. At the same time, it poses challenges to the corporate management system, which must be met by carefully adapting the governance framework.Footnote 117 However, currently, there is no need for strict AI regulation with a specific focus on corporations.Footnote 118 Rather, we see a creeping change from corporate governance to algorithm governance, which has the potential to enhance, but also the risk of destabilizing, the current system. What we really need is the disclosure of information about a company’s practices with regard to AI application, organization, and oversight as well as potentials and risks.Footnote 119 This kind of transparency would help to raise awareness and to enhance the overall algorithm governance system. For that purpose, the corporate governance report that many jurisdictions already require, such as the US,Footnote 120 the UK,Footnote 121 and Germany,Footnote 122 should be supplemented with additional explanations on AI.Footnote 123
In this report, the management board and the supervisory board should report on their overall strategy with regard to the use, organization, and monitoring of AI applications. This specifically relates to the responsibilities, competencies, and protective measures they have established to prevent damage to the corporation. In addition, the boards should be obliged to report on their ethical guidelines for a trustworthy use of AI.Footnote 124 In this regard, they may rely on the proposals drawn up at the international level. Of particular importance in this respect are the principles of the European Commission in its communication on ‘Building Trust in Human-Centric Artificial Intelligence’,Footnote 125 as well as the ‘Principles on Artificial Intelligence’ published by the OECD.Footnote 126 These principles require users to comply with organizational precautions in order to prevent incorrect AI decisions, to provide a minimum of technical proficiency, and to ensure the preservation of human final decision-making authority. In addition, they call for the safeguarding of individual rights, such as privacy, diversity, non-discrimination, and fairness, and for an orientation of AI to the common good, including sustainability, ecological responsibility, and overall societal and social impact. Even if these principles are not legally binding, a reporting obligation requires the management board and the supervisory board to deal with the corresponding questions and to explain how they relate to them. It will make a difference and may lead to improvements if companies and their executives are aware of the importance of these principles in dealing with responsible AI.