2.1 Introduction: Money, Power, and AI
As the authors of this book recognise, money and power are intimately linked. For most consumers, access to banking services, credit, and a savings plan for retirement are necessary – although not sufficient – requirements for a stable, meaningful, and autonomous life. Conversely, financial hardship may have a considerable impact not only on the financial but also on the emotional well-being of consumers.Footnote 1 There are many causes of financial hardship, including high levels of personal debt, reliance on high-cost credit, lack of access to mainstream banking services, and unexpected circumstances such as unemployment or ill health.Footnote 2 Additionally, consumers are sometimes subject to fraudulent, deceptive, and dishonest practices, which can escalate their financial problems.Footnote 3 Moreover, many consumers find that they lack the time or skills to manage their day-to-day finances, select optimal credit products, or invest effectively for the future.Footnote 4
So where does AIFootnote 5 – the third theme of this book – sit in this schema? The growing capacity of AI and related digital technologies has contributed to a burgeoning interest in the potential for financial technology (‘fintech’) to transform the way in which traditional banking and financial services are provided.Footnote 6 Governments across the globe have promoted the capacity of AI-informed fintech to improve market competition and consumer welfare,Footnote 7 and have introduced initiatives to support the development of innovative fintech products within their jurisdictions.Footnote 8 Fintech products are increasingly being used by the financial services sector for internal processes, decision-making, and interactions with customers.Footnote 9
Inside financial institutions, fintech products are assisting in fraud detection, cybersecurity, marketing, and onboarding new clients.Footnote 10 Fintech products are being developed to automate financial services firms’ decisions about lending, creditworthiness, and pricing credit and insurance.Footnote 11 In a consumer-facing role, fintech products are being used for communicating with customers, such as through chatbots (generative or otherwise),Footnote 12 and in providing access to financial products, for example, online applications for loansFootnote 13 or credit cards.Footnote 14 Fintech products are being developed to provide credit product comparisons for consumers looking for the best deal.Footnote 15 However, the most common forms of consumer-facing fintech are, at the time of writing, financial advice toolsFootnote 16 primarily for investing and budgeting.Footnote 17
Consumer-facing fintech generally, and automated financial advice tools specifically, are often promoted as benefiting consumers by assisting them to make better decisions about credit, savings, and investment, and by providing these services in a manner that is more cost-effective, convenient, and consistent than could be provided by human advisers.Footnote 18 These features undoubtedly hold attractions for consumers. However, in our opinion, the allure of AI, and its financial market equivalent of fintech, should not be allowed to overshadow the limitations of, and the risks of harm inherent in, these technologies. As this book makes clear, whether used by governments or private sector firms, AI and automated decision-making tools raise risks of harm relating to privacy, efficacy, bias, and the perpetuation of existing power hierarchies. Albeit on a different scale, consumer-facing fintech, such as automated financial advice tools, carries many of the same kinds of risks, which equally demand regulatory attention and best practice for good governance. There has been little assessment of whether automated financial advice tools are effective in improving the financial well-being of consumers. It is also unclear whether and to what extent such tools are equitable and inclusive, or conversely amplify existing bias or patterns of exclusion in financial services and credit markets.
Some of the potential risks of harm to consumers from automated financial advice tools will be addressed by existing law. However, we argue that there is a need to move past the commercial, and indeed political, promotion of ‘AI’ and ‘fintech’ to understand their specific fields of operation and demystify their scope. This is because the use of AI in this equation is not neutral or without friction. Automated advice tools raise discrete and unique challenges for regulatory oversight, namely opacity, personalisation, and scale. We therefore suggest, drawing on the key principles propounded in AI ethics frameworks, that the effective regulation of automated financial advice tools should require greater transparency about what is being offered to consumers. There should also be a regulatory commitment to ensuring the outputs of such tools are contestable and accountable, having regard to the challenges raised by the technology they utilise.
This chapter explores these issues, beginning with an overview of automated financial advice, focusing on what are currently the most widely available tools, namely ‘robo’ investment advice and budgeting apps. We discuss the risks of harm raised by these uses of AI and related technologies, arising from uncertainty about the quality of the service provided, untrammelled data collection, and the potential for bias, as well as the need for a positive policy focus on the impact of such tools on goals of equity and inclusion. We review the guidance provided by regulators, as well as the gaps and uncertainties in the existing regulatory regimes. We then consider the role of principles of transparency and contestability as preconditions to greater accountability from the firms deploying such tools, and more effective oversight by regulators.
2.2 Aspiration and Application in Consumer-Facing Fintech
The term ‘fintech’ refers to the use of AI and related digital technologies to deliver financial products and services.Footnote 19 The AI used to deliver fintech products may include natural language processing in front-end interfaces to communicate effectively with clients and statistical machine learning models to make predictions that inform financial decision-making. ‘Consumer-facing’ fintech refers to the use of fintech to provide services to consumers, as opposed to use by professional investors, business lenders, or for back-room banking processes. As already noted, perhaps the most prominent form of fintech service offered to consumers, as opposed to informing the internal processes of financial institutions, is automated financial advice, primarily about investing and budgeting.
Most fintech products aim to deliver services at scale, reducing human handling of information and, in the case of consumer-facing fintech, benefiting consumers. Automated financial advice tools typically purport to offer a low-cost option for financial advice derived from insights from consumer data and statistical analysis and provided through an accessible interface using state-of-the-art processing to identify and respond to consumers’ financial aims. The commonly stated aspiration of governments and regulators in supporting the development of these and other fintech products is to promote innovation and to provide low-cost, reliable, and effective financial services to consumers.Footnote 20 Some fintech providers express aspirations to be more inclusive and empower ordinary people to participate in the financial and banking sectors.Footnote 21
There are undoubted attractions in such aspirations.Footnote 22 The majority of consumers do not seek financial planning advice,Footnote 23 probably because it is perceived as being too expensive.Footnote 24 Yet many consumers find financial matters difficult or confusing. This is due to a combination of factors, including low financial literacy, limits on time, and the impact of behavioural biases on decision-making. In principle, automation should allow financial services providers to lower the cost and improve the consistency of advice,Footnote 25 as well as providing the convenience of an on-demand service.Footnote 26 Additionally, by using consumers’ own data, automated financial advice tools have the potential to be uniquely tailored to those consumers’ individual circumstances.Footnote 27 Indeed, this is one of the premises behind Australia’s consumer data right, which aims to give consumers control over their data to promote innovation and competition in the banking sector.Footnote 28
Currently, the two main kinds of automated financial advice tools are robo-advisers and budgeting apps.Footnote 29 Though these tools will no doubt evolve, they provide a simpler, less personalised service than might be envisaged by the ‘AI’ label commonly attached to them.
2.2.1 Robo-Advisers
Robo-advisersFootnote 30 provide ‘automated financial product advice using algorithms and technology and without the direct involvement of a human adviser’.Footnote 31 In principle, robo-advice might cover automated advice about any topic relevant to financial management, such as budgeting, borrowing, investing, superannuation, retirement planning, and insurance. Currently, most robo-advisers provide automated investment advice and portfolio management.Footnote 32
Typically, robo-advice services begin with consumers answering a questionnaire about their goals, expectations, and appetite for risk. An investment profile for consumers is derived from this information, based on their goals and capacity to bear risk. An algorithm matches consumers’ profiles with an investment portfolio available through the advisory firm to produce an investment recommendation.Footnote 33 Should a consumer choose to follow the advice and invest in that portfolio, many robo-advisers will also manage the portfolio on an ongoing basis, keeping it within the parameters recommended for the consumer. Consumers generally pay a fee for the service provided by the robo-adviser, often a percentage of the amount invested, with minimum investment amounts required to access the service.
Robo-advice is sometimes described as ‘trading with AI’.Footnote 34 This language might be thought to suggest specialised insights into the stock market uniquely tailored to consumers’ needs and arrived at through sophisticated machine learning models. The practice is more straightforward. At the time of writing, robo-advisers do not rely on state-of-the-art AI technology, such as using neural networks to process data points and make predictions about stock market moves, or link individual profiles to unique investment strategies. As Baker and Dellaert explain, the matching process will be based on ‘a model of how to optimise the fit between the attributes of the financial products available to the consumer and the attributes of the consumers who are using the robo-advisor’.Footnote 35 The robo-adviser will typically build the consumer profile based on the entry questionnaire and match this with an investment strategy established using financial modelling techniques and based on the investment packages already offered by the firm. The process will usually have been automated through some form of expert system – a hand-coded application of binary rules identified by humans. Ongoing management of the consumer’s portfolio will be done on a similar basis, often using exchange-traded funds (ETFs) that ‘require no or less active portfolio management’.Footnote 36
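To make the contrast with sophisticated machine learning concrete, the matching process described above can be pictured as a small expert system. The following is a minimal sketch only, in which the questionnaire items, risk bands, thresholds, and portfolios are all hypothetical; it does not reproduce any provider’s actual rules.

```python
# Hypothetical sketch of an expert-system robo-adviser matcher.
# All rule thresholds, risk bands, and portfolios are illustrative only.

# The firm's pre-built portfolios (e.g., baskets of ETFs), fixed in advance.
PORTFOLIOS = {
    "conservative": {"bond_etf": 0.80, "equity_etf": 0.20},
    "balanced":     {"bond_etf": 0.50, "equity_etf": 0.50},
    "growth":       {"bond_etf": 0.20, "equity_etf": 0.80},
}

def risk_band(answers: dict) -> str:
    """Hand-coded binary rules mapping questionnaire answers to a risk band."""
    if answers["years_to_goal"] < 3 or answers["loss_tolerance"] <= 2:
        return "conservative"
    if answers["years_to_goal"] < 10 and answers["loss_tolerance"] <= 4:
        return "balanced"
    return "growth"

def recommend(answers: dict) -> dict:
    """Match the consumer's profile to one of the firm's existing portfolios."""
    return PORTFOLIOS[risk_band(answers)]

# Example: a consumer saving for a goal 15 years away, tolerant of losses.
print(recommend({"years_to_goal": 15, "loss_tolerance": 5}))
# -> {'bond_etf': 0.2, 'equity_etf': 0.8}
```

Nothing in a process of this kind predicts market movements or learns from data: the ‘intelligence’ consists of fixed rules written by humans and a fixed menu of portfolios assembled by the firm.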
Unlike human financial advisers, robo-advice tools typically do not provide budgeting or financial management advice to consumers.Footnote 37 Their recommendations are limited to the kinds of investment that will match consumers’ investment profiles. Robo investment advisers do not provide advice on matters of tax, superannuation, asset management, or savings, and they do not yet have the capacity to provide this more nuanced advice.Footnote 38 Sometimes robo-advice tools are used in conjunction with human financial advisers who will provide a broader suite of advice. Automated budgeting tools are also increasingly available on the market.
2.2.2 Budgeting Tools
Budgeting tools allow consumers to keep track of their spending by categorising expenses and providing dashboard-style visualisations of spending and saving.Footnote 39 Some banks offer budgeting tools to clients, and there are many independent service providers. Some neo-banks have, additionally, consolidated their brand around their in-built budgeting tools.Footnote 40
As with robo-advisers, automated budgeting tools collect information about consumers through an online questionnaire. Budgeting tools also typically require consumers to provide access to their bank accounts, in order to scrape transaction data from that account,Footnote 41 or alternatively rely on data-sharing arrangements.Footnote 42 Based on this information, the services provided by budgeting tools include categorising and keeping track of spending; providing recommendations about budgeting; and monitoring savings.Footnote 43 In some cases, the tools will transfer funds matching consumers’ savings goals to a specific account, provide bill reminders, make bill payments, monitor information about credit scores, or suggest potential savings through various cost-cutting measures or the identification of alternative service providers.Footnote 44 Additionally, automated budgeting tools may provide articles and opinion pieces about financial matters, such as crypto, non-fungible tokens (NFTs), or budgeting.Footnote 45 Some budgeting tools have a credit card option,Footnote 46 and at least one is linked to a ‘buy now-pay later’ provider.Footnote 47
Automated budgeting tools often describe their service as relying on AI.Footnote 48 Again, however, despite this terminology, they do not typically provide a personalised savings plan derived from insights drawn from multiple data points relating to consumers. They may use some form of natural language processing to identify spending items. Primarily, somewhat like robo-advisers, they rely on predetermined, human-coded rules for categorising spending and presenting savings. Most budgeting tools are free, although some charge for a premium service. This means that the tools are funded in other, more indirect ways, including through selling targeted advertisements on the app, fees for referrals, commissions for third-party products sold on the app, the sale of data (usually aggregated), and in some cases a percentage of the savings where a lower-cost service or provider is identified for consumers.Footnote 49
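As a rough illustration of the predetermined, human-coded categorisation rules just described, the following sketch assigns spending categories by simple keyword matching. The categories, keywords, and transactions are hypothetical and not drawn from any real app.

```python
# Hypothetical sketch of rule-based transaction categorisation.
# Keyword lists and categories are illustrative only.

RULES = {
    "groceries":     ["supermarket", "grocer", "market"],
    "transport":     ["fuel", "parking", "transit"],
    "subscriptions": ["streaming", "gym", "news"],
}

def categorise(description: str) -> str:
    """Assign a spending category by keyword matching on the description."""
    desc = description.lower()
    for category, keywords in RULES.items():
        if any(word in desc for word in keywords):
            return category
    return "uncategorised"  # fall-through bucket for unmatched merchants

transactions = [
    ("ACME SUPERMARKET 1234", 84.50),
    ("CITY PARKING STATION", 12.00),
    ("STREAMING SVC MONTHLY", 15.99),
]

for description, amount in transactions:
    print(f"{categorise(description):>14}  ${amount:6.2f}  {description}")
```

Which purchases fall into which buckets – and what counts as ‘discretionary’ – is decided entirely by whoever writes the rule list, a point that matters for the bias concerns discussed in Section 2.3.3 below.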
2.3 Regulation and Risk in Consumer-Facing Fintech
This brief survey of available automated financial advice tools aimed at consumers suggests that they are operating with a fairly narrowly defined scope and using relatively straightforward digital processes. The tools may evolve in the future to make greater use of state-of-the-art AI, for example by using generative AI to provide general advice to consumers. However, even in their current form, the tools pose risks of harm to consumers that are more than fanciful, and similar to those raised by AI generally. The risks arising from AI are becoming increasingly well recognised, including poor efficacy, eroding privacy, data profiling, and bias and discrimination.Footnote 50 These risks are also inherent in consumer-facing fintech and automated financial advice tools. Moreover, we suggest they are only partially addressed by existing law. While financial services law commonly imposes robust obligations on those providing financial advice, those obligations may not squarely address the issues arising from the automated character of the advice, particularly issues of bias. Additionally, some automated advice tools, such as budgeting apps, may fall outside of these regimes. It is therefore worth considering these issues in more detail.
2.3.1 Quality of Performance
One of the notable features of automated financial advice is that consumers are unlikely to be able to scrutinise the quality of the service provided. Consumers will typically turn to automated advice tools because they lack skills in the relevant area, be it investing or budgeting. This lack of expertise makes it difficult for them to assess the quality of the advice they receive.Footnote 51 Compared with standard consumer goods, there is little information available to assist consumers in selecting between different tools. While some rankings of automated financial advice tools have emerged, these often focus on ease of use – the interface, syncing with bank data, fees charged – rather than the quality of the advice provided,Footnote 52 and some ranking reviews include sponsored content.Footnote 53 Accordingly, at least at this point in time, automated financial advice tools may be very much a credence good – for which assertions of quality are all that is available to consumers. Unless the advice provided by the tools is patently bad, it may not be apparent that the poor quality of the automated process is to blame, as opposed to other external factors. Indeed, without a point of comparison, which is effectively excluded by the personalised nature of the service, it may be difficult for consumers to identify poor-quality advice at all.
There is currently little academic research on the extent to which consumers are well served by automated financial advice tools, particularly when weighed against possible costs in terms of data-sharing.Footnote 54 A number of concerns have, however, been raised in the literature about how well the tools may function. Although robo-advisers may operate in a manner that is more objective and consistent than human financial advisers,Footnote 55 this does not mean they operate free from the influence of commissions, which may be coded into their advisory process. It is unclear to what extent the recommendations provided by automated financial advice tools are personalised to consumers, as opposed to being generic or based on broad target groupings. Additionally, concerns have been raised about the relatively small number of investment options actually held by robo-investment advisers.Footnote 56 While automated budgeting tools may assist consumers by providing an accessible, straightforward, and visual way of monitoring spending,Footnote 57 this does not necessarily translate into long-term savingsFootnote 58 or improved financial literacy.Footnote 59 It is further possible that one of the main functions of at least some budgeting apps is to capture consumers’ attention in order to market other financial services, such as credit cards, and to provide the opportunity for providers to profit from the use or sale of consumer data for marketing and data analytics.Footnote 60
In consumer transactions – particularly those that are complex, hard for consumers to monitor, or which carry the risk of high impact harms – reliance is usually placed on regulators to take ‘ex ante’ measures for ensuring that the products supplied to consumers are acceptably safe and reliable. Financial services regulators in jurisdictions such as Australia, the United Kingdom, the European Union, and the United States of America have responded to the rise of robo-advisers by affirming that the existing regulatory regime applies to this form of advice.Footnote 61 Financial services providers are typically subject to an array of statutory conduct obligations, which overlap, albeit imperfectly, with their fiduciary duties arising under general law.Footnote 62 These statutory duties require firms to manage conflicts of interest,Footnote 63 act in their clients’ best interests,Footnote 64 ensure the suitability of the advice provided,Footnote 65 and take reasonable care in providing the advice.Footnote 66 These obligations should, in principle, assist in addressing concerns about the quality of the service provided by robo-advisers.Footnote 67 Nonetheless, some uncertainties remain, including, for example, whether the category-based approach deployed by robo-advisers fits with statutory requirements for personalised advice that is suitable for the individual consumer.Footnote 68
Regulators have additionally stated they expect firms providing robo-advice to have a ‘human in the loop’, in the sense of a person with ‘an understanding of the technology and algorithms used to provide digital advice’ who is ‘able to review the digital advice generated by algorithms’.Footnote 69 Recommendations for a human overseeing the automated advice leave open the question of what that human should be monitoring – is it merely compliance with existing law applying to the giving of advice, or should there be other considerations taken into account, arising from the automated character of the advice?
In terms of the issue of automation, regulators have focused on the informational aspects of the process. They have emphasised that firms providing automated advice should give consideration to the way in which the information on which the advice is based is collected from consumers so as to ensure it is accurate and relevant, especially because there is no human intermediary to pick up possible discrepancies or errors. Regulators have also advised firms to take care in the way the advice is framed and explained, given the potential for misunderstanding and error in an automated process.Footnote 70 Issues of information gathering and reporting are important but they are only part of the challenge presented by automation for consumer protection law and policy. Moreover, they tend to represent a very individualised response to the risks of harm to consumers relying on automated financial advice, focusing on what consumers need to provide and understand, as opposed to the substance of the process through which advice is provided.
Notably, there is typically no specific law or regulatory guidance that applies to automated budgeting tools, which do not involve financial services. These tools will be subject to general consumer protection regimes, which typically prohibit misleading conduct, and mandate reasonable care and skill in the provision of services.Footnote 71 Uncertainties about the application of existing law to automated advice give rise to the question of whether other kinds of regulatory mechanisms are required to complement sector-specific or general consumer protection law in order to address the risks of harms that are specific to the use of AI and related digital technologies. In answering this question, we suggest that, at minimum, the risks around data collection and bias need to be considered.
2.3.2 The Data/Service Trade-Off
Automated financial advice tools operate on the core premise that consumers necessarily hand over data to obtain the service. A firm may be using consumer data for the dual purposes of providing advice and making a return for itself, such as through promoting other products for a commission on sales, up-selling add-on products for a fee, or on-selling the data for profit.Footnote 72 This behaviour is particularly apparent in the case of budgeting apps, which are typically free. As already noted, these services earn income through in-app advertising, fees, and commissions for referrals and potentially through selling aggregated consumer data, as well as targeted advertising. Notably, the privacy terms of automated budgeting tools commonly allow the collection of a wide range of consumer data and the use of that data for a number of purposes, including improving the service and related company group services, marketing, and, in aggregated form, sharing with third parties.Footnote 73
Data protection and privacy law impose obligations on the collection and processing of data.Footnote 74 However, the key requirements of notice and consent typically found under these regimes may easily be met in automated advice contexts because the exchange is at the heart of the transaction. Consumers provide their data in order to obtain the advice they need. While consumers may be unaware of how much information they are handing over, there is some evidence that consumers, particularly younger consumers, are prepared to trade data for cheaper, more efficient financial services.Footnote 75 Yet, to the extent consumers are ill-informed or under-informed about the quality of the service being provided by automated advice tools, the data-for-service bargain may prove thinner than it first appeared.Footnote 76 Under the fintech service model, consumers provide personal data to obtain a personalised and cost-effective service but have few objective measures as to the quality of what is actually being provided.
2.3.3 Bias and Exclusion
In discussing legal and regulatory responses to the growing influence of AI and related technologies, much attention has rightly been given to their role in amplifying surveillance, bias, and discrimination.Footnote 77 The technologies may use personal data to profile consumers, which in turn allows firms to differentiate between different consumers and groups with a high degree of precision, leading to risks of harmful forms of targeted advertising or differential pricing.Footnote 78 Bias and error are particular concerns in firms’ use of AI technologies for decision-making, including in decisions about lending,Footnote 79 credit,Footnote 80 or insurance.Footnote 81 Automated lending decisions and credit scoring might be more objective than human-made decisions and might benefit cohorts that have previously been disadvantaged by human prejudice.Footnote 82 But there is no guarantee this is the case, and indeed the outcomes may be worse for these groups. Differential treatment of already disadvantaged groups – such as minority or low-income cohorts – may already be embedded in the practices and processes of the institution. To the extent this data is used in credit-scoring models or to inform automated decisions, historical unequal treatment may be amplifiedFootnote 83 or distorted.Footnote 84 Unequal treatment may, moreover, be difficult to identify or address where it is based, not directly on protected attributes, but on proxies for those attributes found in the training data.Footnote 85
Bias may also be embedded in automated advice tools used by consumers. For example, a robo-advice tool might exhibit bias by treating a person who takes time off work for childrearing as going through a period of precarious employment or being unable to hold down steady employment. An automated budgeting tool might exhibit bias by characterising products for menstruation as discretionary spending, instead of essentials. There are complex technical and policy decisions to be made in identifying and responding to the risks of unacceptable bias in automated financial advice tools.Footnote 86 Consumer protection and financial services law have not traditionally been central to this process, which is primarily the domain of human rights law. However, decisions based on historical prejudice may be unconscionable or unfair, contrary to consumer protection law. Certainly, in the United States, the Federal Trade Commission has indicated that discriminatory algorithms would fall foul of its jurisdiction to respond to unfair business practices.Footnote 87
A related issue concerns financial exclusion. Fintech innovators and government initiatives to encourage innovation often refer to an aspiration of promoting inclusion and overcoming exclusion.Footnote 88 There are few findings on the extent to which this aspiration is achievable. There are plausible reasons why automated advice tools may fail to assist, or assist adequately, consumers already excluded from mainstream financial or banking services, or consumers who have had less engagement with the mainstream banking system, such as where they are ‘not accessing or using financial services in a mainstream market in a way that is appropriate to their needs’.Footnote 89 Financially excluded consumers might not be offered meaningfully relevant advice tools because there is no relevant or useful data about them or because they are unlikely to be sufficiently profitable for financial services providers to develop products suited to them. These consumers may also find that the models on which the advisory tools are based are inaccurate when applied to their circumstances.
For example, investment tools may be of little value to consumers struggling to make ends meet and with no savings to invest. The models used by automated budgeting tools may have a poor fit with consumers living on very low incomes, for whom cutting back on discretionary spending is simply not an option. In these circumstances, the tools will do little to improve equity, leaving unrepresented groups without advice, or without relevantly personalised advice. Moreover, there may be a real risk of harm. Inept recommendations may subject consumers to harms of financial over-commitment or lull inexperienced consumers into a false sense of financial security. At a more systemic level, the availability of automated advice tools for improving financial well-being may feed into longstanding liberal rhetoric about the value of individual responsibility, as opposed to government initiatives for improving overall financial well-being.
It is possible to envisage services that would be useful to financially excluded consumers or consumers experiencing financial hardship, such as advice on affordable loans and other services.Footnote 90 Emma Leong and Jodi Gardner point to proposed uses of Open Banking in the United Kingdom to provide tools that assist with better managing fluctuating incomes.Footnote 91 The United Kingdom Financial Conduct Authority notes there are some apps on the market providing legal aid and welfare support advice.Footnote 92 These kinds of initiatives are likely to require a deliberate policy decision to initiate rather than arising ‘naturally’ in the market.Footnote 93 This is because, without government support, there would seem to be little commercial incentive for firms to invest in tools specifically tailored to low-income or otherwise marginalised consumers, from whom there is little likelihood of an ongoing lucrative return to the firm.
2.4 New Regulatory Responses to the Risks of Automated Financial Advice
Automated financial advice tools illustrate the continuing uncertainties in regulating consumer-facing fintech and AI-informed consumer products. We have seen that regulators will need to adapt existing regimes to the new ways in which services are being provided to consumers, which requires attention not only to the risks in providing advice but also to the risks in automating that advice. We further suggest that regulators need to be cognisant of the ways in which the AI and digital technologies informing the tools raise unique challenges for regulation. Opacity is a key concern in any regulatory response to making AI systems more accountable.Footnote 94 Automated financial advice tools may not currently rely on sophisticated AI, in the sense of deep learning or neural networks. Nonetheless, they are for commercial (if not technical) reasons highly opaque as to the technology being utilised and how recommendations are reached. Their very purpose is to provide advice without significant human intervention and at scale, which may amplify harms of bias or error in the system.Footnote 95 The tools typically purport to provide output based on factors personal to the consumer, which may make it difficult to determine whether an adverse outcome is merely unfortunate, a systematic error, or a failure of legal duty.Footnote 96
One response to navigating the challenges of regulating consumer-facing fintech is provided by the principles of ethical AI.Footnote 97 Principles of AI ethics are sometimes criticised as too general to be useful.Footnote 98 The principles operate as a form of soft law – they are not legally binding and must necessarily be supplemented by legal rules.Footnote 99 However, principles of AI ethics may be effective when operationalised to apply to specific contexts and when used in conjunction with other forms of regulation. The principles provide the preconditions for responsible use of AI and automated decision tools by firms. They also provide an indication of what regulators should demand from firms deploying such technology to reduce the risk of harm to consumers.Footnote 100 While there are various formulations of the principles of ethical AI,Footnote 101 key features typically include requirements for AI to be transparent and explainable,Footnote 102 along with mechanisms for ensuring accountabilityFootnote 103 and – at least in the Australian government’s principlesFootnote 104 – contesting adverse outcomes.Footnote 105
2.4.1 Transparency and Explanations
Principles of ethical AI typically require the use of such technologies to be transparent.Footnote 106 A starting place for transparency is to inform consumers when AI is being used in an interaction with them. Applied to automated financial advice tools, transparency must mean more than informing consumers that AI is being used to provide advice. Consumers choosing to turn to a robo-adviser or budgeting app will usually be aware of the automated character of the advice. Consumers also require transparency about the kind of technology being used to provide that advice: for example, whether it is based on machine learning or a hand-coded expert system. Additionally, a principle of transparency would require firms to inform consumers clearly about the scope of the service that is being provided, including the limitations of the technology in terms of personalised or expert advice.Footnote 107 If the advice provided is generalised to broadly defined categories of consumers, then this should be made clear, to counter consumers’ expectations of a unique and personal experience.
To the extent that consumers overestimate the capacities of fintech, transparency in the way the advice is produced is important to ground expectations and allow scrutiny of the veracity of claims made about it. For regulators, transparency is key to overseeing the performance of the tools: it allows bias or distortions in the scope of advice to be identified, scrutinised and, in some instances, rectified. Regulation can support the imperative for firms to take these ethical demands seriously, including by treating them as necessary elements of statutory obligations of suitability or best interests, and essential to ensuring that claims about the operation of the product are not misleading. For example, the process of automation, and its claims to objectivity and consistency, may make consumers overconfident about the advice and more likely to act on it.Footnote 108 This might suggest an obligation on firms to be scrupulously clear about the limits of what can be provided by automated advice tools, and of the insights that can be derived from the technology being utilised.Footnote 109
Transparency in ethical AI is closely associated with initiatives in AI ‘explanations’ or ‘explainability’.Footnote 110 Explanations in this sense do not lie in the details of the code. Rather, explainable AI considers the kind and degree of information that should be provided in assisting the various stakeholders in the decision or recommendation process to understand why decisions were taken or the factors that were significant in reaching a recommendation.Footnote 111 Explainable AI aims to provide greater transparency into the basis for automated decisions, predictions, and recommendations.Footnote 112 There are different ways in which explanations may be provided, and indeed the field of study in computer science is still developing.Footnote 113 Possibilities include the use of counterfactuals, feature disclosure scores, weightings of influential factors, or a preference for simpler models where high levels of accuracy are not as imperative.Footnote 114 Overall, however, a requirement for explanations would assist in scrutinising the basis of the recommendations produced through automated financial advice tools.
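To indicate what an explanation of this kind might look like in practice, the following sketch generates simple counterfactual explanations for a hypothetical rule-based matcher of the sort described in Section 2.2.1. It is illustrative only; explainability techniques for complex machine learning models are considerably more involved.

```python
# Hypothetical sketch of counterfactual explanations for a rule-based
# recommendation: find a change to a single answer that would have
# produced a different risk band. Rules and bands are illustrative only.

def risk_band(years_to_goal: int, loss_tolerance: int) -> str:
    """Hand-coded rules mapping two questionnaire answers to a risk band."""
    if years_to_goal < 3 or loss_tolerance <= 2:
        return "conservative"
    if years_to_goal < 10 and loss_tolerance <= 4:
        return "balanced"
    return "growth"

def counterfactuals(years_to_goal: int, loss_tolerance: int) -> list[str]:
    """List single-answer changes that would alter the recommendation."""
    base = risk_band(years_to_goal, loss_tolerance)
    notes = []
    for years in range(1, 31):  # vary the investment horizon
        if risk_band(years, loss_tolerance) != base:
            notes.append(f"With {years} years to goal instead of "
                         f"{years_to_goal}, the recommendation would be "
                         f"'{risk_band(years, loss_tolerance)}'.")
            break
    for tol in range(1, 6):  # vary the stated loss tolerance
        if risk_band(years_to_goal, tol) != base:
            notes.append(f"With loss tolerance {tol} instead of "
                         f"{loss_tolerance}, the recommendation would be "
                         f"'{risk_band(years_to_goal, tol)}'.")
            break
    return notes

for note in counterfactuals(years_to_goal=15, loss_tolerance=5):
    print(note)
```

For a hand-coded expert system of this kind, counterfactuals are cheap to enumerate; for complex machine learning models, dedicated explainability tooling is required, which is precisely where the developing field referred to above comes in.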
For lawyers, suggesting that a core element in the regulation of automated financial advice tools should focus on requirements related to transparency/explanations may seem a surprising aspiration.Footnote 115 Disclosure as a consumer protection strategy has increasingly fallen out of favour, particularly in the regulation of financial services and credit. The insights into decision-making from behavioural psychology have shown that mere information disclosure does not lead to better decisions by consumers. Consumers are subject to bounded rationality, which means they rely on rules of thumb and heuristics, and are influenced by behavioural biases, rather than acting on disclosed information.Footnote 116 In this light, it may be thought that any demand for greater transparency in automated financial advice tools may be of marginal utility. However, in a consumer protection context, consumers’ interests are substantially protected by regulators, and therefore transparency and explanations are relevant both to consumers seeking to protect their interests and to regulators charged with overseeing the market. Explanations should be provided in a form that is meaningful to the recipient.Footnote 117 This means that the detail and technicality of the information provided may need to differ between consumers and regulators.Footnote 118 In other words, the requirements should be scaled according to who is receiving the explanation.
2.4.2 Accountability
Principles of AI ethics typically require mechanisms for ensuring firms are accountable for the operation of the technologies.Footnote 119 To have impact, accountability will require more than allocating responsibility for supervising the AI to a person. There is little worth in having a ‘human in the loop’ in circumstances where the design of the AI or automated tool means it is difficult for that person genuinely to oversee, interrogate or control the tool.Footnote 120 Accountability for automated financial advice tools should therefore require a firm to implement systematic processes for reviewing the operations and performance of the tools.Footnote 121 A commitment to accountability may thus require firms to have processes for scrutinising the data on which the AI is trained, its ongoing use, and its outputs.Footnote 122 A model for the kind of robust approach required might be found in the audits increasingly recommended for AI used in public sector decision-making.Footnote 123 Such processes should aim to ensure the veracity of the tools and are a critical element in addressing and redressing concerns about bias, equity, and inclusion.Footnote 124
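One concrete form such review processes might take is a periodic audit of the tool’s logged outputs across consumer cohorts. The following is a minimal sketch under assumed cohort labels, records, and a review threshold; a skewed distribution is not of itself proof of bias, but it is the kind of signal that should trigger human review.

```python
# Hypothetical sketch of an output audit: compare the distribution of
# recommendations across consumer cohorts and flag skews for human review.
# Cohort labels, records, and the review threshold are illustrative only.

from collections import Counter, defaultdict

# Each record: (cohort, recommendation) as logged by the advice tool.
logged_outputs = [
    ("low_income", "conservative"), ("low_income", "conservative"),
    ("low_income", "conservative"), ("low_income", "balanced"),
    ("high_income", "growth"), ("high_income", "balanced"),
    ("high_income", "growth"), ("high_income", "conservative"),
]

by_cohort = defaultdict(Counter)
for cohort, recommendation in logged_outputs:
    by_cohort[cohort][recommendation] += 1

for cohort, counts in by_cohort.items():
    total = sum(counts.values())
    shares = {rec: round(n / total, 2) for rec, n in counts.items()}
    print(cohort, shares)
    # Flag the cohort for human review if one recommendation dominates.
    if max(counts.values()) / total > 0.7:
        print(f"  -> review: skewed outputs for cohort '{cohort}'")
```

An audit of this kind does not itself decide whether a skew is justified; it creates the record that allows a human reviewer, or a regulator, to ask that question.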
2.4.3 Contestability
There is little utility in requiring transparency and accountability in AI systems if there is no mechanism available to those affected by an AI or automated decision for acting to challenge an outcome that is erroneous, discriminatory, or otherwise flawed. Some formulations of AI ethical principles respond to this issue by requiring processes for contesting adverse outcomes.Footnote 125 While accountability processes should aim to be proactive in preventing these kinds of problems, contestability is a mechanism for individuals, advocates, or regulators to respond to harms that do occur.
Lyons et al. make the point that little is currently known about ‘what contestability in relation to algorithmic decisions entails, and whether the same processes used to contest human decisions … are suitable for algorithmic decision-making’.Footnote 126 Contestability for automated decisions may not be able simply to follow existing mechanisms for dealing with individual complaints or concerns. The models informing AI may be complex and opaque, thus creating challenges for review by subject domain experts who may nonetheless be unfamiliar with the technology. Additionally, scale creates a challenge. This is because one of the benefits of automated decision-making is that it can operate on a scale that is not possible for human decision-makers or advisers, and yet this makes processes for individual review potentially unmanageable.
The inquiry into what contestability requires may be different in the context of automated financial advice tools, as opposed to public sector use of automated decision-making. Consumers using automated advice tools will not be challenging a decision made about their rights to access public resources or benefits. Rather, they will be challenging the advice given to them, the consistency of this advice with any representations about the tool, or compliance with any applicable regulatory regimes. Nonetheless, complexity and scale remain significant challenges. It is possible that the field of consumer protection law may have insights to offer, given its focus on both legal rights and structural mechanisms for protecting consumers’ interests in circumstances where there are considerable imbalances in power, resources, and information, which in some ways mirrors concerns around AI contestability. In the context of automated financial advice tools, for example, contestability for poor outcomes may come through the oversight provided by ombudsmen and regulators, rather than through traditional litigation. These inquiries have the capacity to look at systemic errors, thus bringing expertise and capacity to review the processes through which advice or recommendations are provided, rather than necessarily reopening every decision.
2.5 Conclusion
The triad of money, power, and AI collides in fintech innovation, which sees public and private sector support for using AI, along with blockchain and big data, in the delivery of financial services. Currently, the most prominent forms of fintech available to consumers are automated advice tools for investing and budgeting. These tools offer the advantages of low-cost, convenient, and consistent advice on matters consumers often find difficult. Without discounting these attractions, we have argued that the oft-stated aspiration of automated financial advice tools to democratise personal finance should not distract attention from their potential to provide only a marginally useful service, while extracting consumer data and perpetuating the exclusion of some consumer cohorts from adequate access to credit, advice, and banking. From this perspective, consumer-facing fintech provides an instructive example of the need for careful regulatory attention to the use of AI and related technologies even in seemingly low-risk contexts. Fintech tools that hold out to consumers a promise of expertise and assistance should genuinely be fit for purpose. Consumers are unlikely to be able to monitor this quality themselves. As such, robust standards of transparency, accountability, and contestability that facilitate good governance and allow adequate regulatory oversight are crucial, even for these modest applications of AI.