
The Risk-Based Approach of the European Union’s Proposed Artificial Intelligence Regulation: Some Comments from a Tort Law Perspective

Published online by Cambridge University Press:  05 December 2022

Johanna Chamberlain*
Affiliation:
Postdoctoral researcher within WASP-HS project “AI and the Financial Markets: Accountability and Risk Management with Legal Tools”, Commercial Law, Department of Business Studies, Uppsala University, Uppsala, Sweden.

Abstract

How can tort law contribute to a better understanding of the risk-based approach in the European Union’s (EU) Artificial Intelligence Act proposal and evolving liability regime? In a new legal area of intense development, it is pivotal to make the best possible use of existing regulation and legal knowledge. The main objective of this article is thus to investigate the relationship between traditional tort law principles, with a focus on risk assessments, and the developing legislation on artificial intelligence (AI) in the EU. The article offers a critical analysis and evaluation from a tort law perspective of the risk-based approach in the proposed AI Act and the European Parliament resolution on a civil liability regime for AI, with comparisons also to the proposal for a revised and AI-adapted product liability directive and the recently proposed directive on civil liability for AI. The discussion illuminates both challenges and possibilities in the interplay between AI, tort law and the concept of risk, demonstrating the considerable potential of tort law as a tool for handling emerging AI issues.

Type
Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

I. Introduction

Risk is a concept that has become central in a developing and extensive field of European Union (EU) legislation, namely the proposed EU regulation on artificial intelligence (AI). In the risk-based approach of the AI Act proposal,Footnote 1 a “pyramid of criticality” divides AI-related risks into four categories: minimal risk, limited risk, high risk and unacceptable risk. At the same time, risk is not a legal concept, and a number of questions arise regarding its meaning in a legal context.Footnote 2 Legal arguments have been presented on a general level when it comes to regulating risks,Footnote 3 perhaps most notably by SunsteinFootnote 4 and Jarvis Thomson,Footnote 5 but much remains to be said about the handling of risk in specific legal areas. Against this background, this article will focus on the risk discourse in the chosen area of tort law. The starting point is the risk-based approach of the AI Act proposal, but as this proposal does not include rules on liability, the analysis will go on to cover other legal instruments containing proposed rules on liability for AI.

First, some central risk elements within traditional tort law principles and assessments will be discussed, with examples primarily from Swedish tort law, followed by an overview of the risk-based approach of the proposed AI Act. This risk structure will then be analysed critically from a tort law perspective, after which the discussion continues to the European Parliament’s (EP) resolution on liability for AI,Footnote 6 with references to the proposed revision of the product liability directiveFootnote 7 and the proposed adaptation of civil liability rules to AI.Footnote 8 Within this liability theme, certain parallels and differences are identified between the suggested regime on civil liability for AI systems and existing EU regulation, namely liability for data protection breaches in the GDPR.Footnote 9 In a final section, some concluding remarks on the challenges of current legal developments in the field of AI and tort law are put forward.

II. Risk in tort law: fault and negligence, strict liability and assumption of risk

The first substantive area of law that comes to mind in relation to the concept of risk is probably insurance law. In insurance law, risks are typically formulated within the specific clauses of the insurance contract. The contract sets out the prerequisites for a binding agreement concerning potential future loss, with certain clauses specifying the risks covered (eg fire, water damage, burglary) and sometimes describing these risks in more detail (eg burglary is not covered where the thief did not have to force entry).Footnote 10 As insurance contracts must be as precise as possible in regulating risks – and the actual risk, which forms the basis of the cost of insurance, is calculated by statisticians – it is contrary to the nature of insurance law to offer more general techniques or arguments relating to risk. In the search for legal arguments relating to risk, we will therefore continue to a nearby area of law.

In light of the specific risk regulation in insurance law, tort law may appear to be further removed from risk than insurance law. However, tort law offers a different and more general basis for a legal analysis of the concept of risk. In tort law, the evaluation of risk is central to both strict and fault-based liability – a statement that will now be developed in relation to each category, starting with fault-based liability and continuing with strict liability.

Within fault-based liability, risk is an important part of the assessment of negligence. It is common to divide this assessment into two parts, where the first part encompasses a four-step circumstantial inventory. The risk of damage is the first step.Footnote 11 The next step is the extent of the potential damage, followed by alternative actions and, lastly, the possibility (for the potential tortfeasor) to realise the circumstances under the earlier steps. In the second part of the assessment, the four steps of part 1 are weighed together – with the parameters risk plus potential damage on the one hand and alternatives plus insight on the other. Depending on the result of this balancing act, a conclusion will be reached in each specific case regarding the existence of negligence on the part of the potential tortfeasor.Footnote 12
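For readers who prefer a schematic illustration, the two-part structure just described can be sketched in a few lines of code. The sketch below is purely illustrative and is not drawn from the article or from any legal source: the factor names, the 0–1 scales and the threshold are assumptions chosen only to show how the four-step inventory feeds into the final balancing act.

```python
# A minimal sketch (illustrative assumptions only) of the two-part negligence
# assessment: four circumstantial factors are scored, then weighed together.
from dataclasses import dataclass

@dataclass
class NegligenceFactors:
    risk_of_damage: float        # step 1: likelihood that damage occurs (assumed 0-1 scale)
    extent_of_damage: float      # step 2: severity of the potential damage (0-1)
    alternative_actions: float   # step 3: availability/ease of safer alternatives (0-1)
    foreseeability: float        # step 4: tortfeasor's possibility to realise steps 1-3 (0-1)

def is_negligent(f: NegligenceFactors, threshold: float = 0.25) -> bool:
    """Part 2: weigh risk plus potential damage against alternatives plus insight."""
    danger_side = f.risk_of_damage * f.extent_of_damage
    precaution_side = f.alternative_actions * f.foreseeability
    # High danger combined with available, foreseeable precautions points towards negligence.
    return danger_side * precaution_side > threshold

# Example: kicking a ball in a crowded park where a safer alternative was easy to choose.
print(is_negligent(NegligenceFactors(0.7, 0.6, 0.9, 0.8)))  # True under these assumed values
```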

Within the negligence assessment, issues concerning risk are relevant primarily when the risk of damage is analysed. Examples of such risks are typically related to the surroundings of a certain action: when a person kicks a ball in a park, was there a risk of anyone outside the game being hit? When someone lets their cat roam freely, is there a risk of the cat getting into a neighbour’s home and soiling their carpet? When you are sawing a branch off a tree in your garden, is there a risk that it could fall and crush someone else’s bike? And so on.Footnote 13 The notable thing about these examples is that they tend to be highly circumstantial; the assessment of minimal/limited/high/unacceptable risk will – despite certain objective standardsFootnote 14 – vary from case to case depending on every discernible fact. By comparison, an obvious challenge within the proposed AI regulation is that risk categories will be determined beforehand. Does such a system allow for a fair risk assessment, with consideration of individual factors? We will return to this question in the context of the suggested regulation.

Another area of tort law where risk evaluation is central is strict liability. Traditionally, strict liability has been imposed for “dangerous enterprises” such as industries and different forms of transport, but also for military shooting exercises and on dog owners.Footnote 15 Where strict liability is applicable, fault is no longer a prerequisite for liability. Strict liability regulations define who is to be considered responsible when damage occurs – most often the owner of a company, dog, etc. If the risk brought about by, for example, a new technology materialises and damage is thus caused within an area where strict liability applies, the victim will not have to prove the occurrence of wrongdoing or the causal link between wrongdoing and the loss suffered.Footnote 16

Strict liability signals that great care is required when actions are taken within areas where the risks are significant, thereby demonstrating the preventative function of tort law.Footnote 17 This is where the arguments concerning risk come in. Imposing strict liability is a means of spreading risk and cost, placing responsibility on the actors with the most control and knowledge and solving complex issues of causation.Footnote 18 There is an ongoing discussion in tort law regarding which activities warrant regulation through strict liability. For plaintiffs, strict liability naturally carries many benefits, enabling compensation in a variety of cases without demanding proof of negligence. However, there is also a societal interest in companies and individuals undertaking risky enterprises that society considers necessary. If the far-reaching form of strict liability is imposed too liberally, we risk being left without suppliers of these risky yet necessary functions. In this sense, the issue of strict liability, too, boils down to a balancing assessment of risks and costs. How risky is the activity in question? Risky enough to be subject to a regulation imposing strict liability? Can we afford to lose suppliers following such an intervention?Footnote 19

A third relevant issue in tort law is how assumption of risk impacts the damages assessment. If a person has agreed to undertake an activity that may lead to harm, such as taking part in a football game, the assumption of risk is considered to limit the prospects of damages – up to a certain level.Footnote 20 For example, in Swedish law it is possible to consent to assault of the lower degree but not of the standard degree.Footnote 21 The discussion of risk here focuses on what the assumption of risk encompassed in the individual case and whether the surrounding actions and causes of damage can be said to have gone beyond the consent of the injured party. Once again, the assessment of how assumption of risk impacts damages depends on the category of harm and on the detailed circumstances of the individual case, something that can be discussed in relation to the predetermined risk categories of the proposed AI regulation. Will assumption of risk even be possible when it comes to AI services in the EU, considering the proposed prohibitions and restrictions?

As can now be seen, several central areas of tort law – the negligence assessment, strict liability and assumption of risk – contain established risk assessments and may be useful when it comes to understanding the risk-based approach of regulating AI. In the following sections, these possible connections shall be examined in closer detail and certain challenges identified.

III. The risk-based approach of the proposed AI Act

The risk-based approach in the proposed EU regulation on AI is new at the EU level but has parallels in already existing legal instruments in the AI area.Footnote 22 These developments suggest that a risk-based approach may become the global norm for regulating AI.Footnote 23 As mentioned above, the proposed EU regulation differentiates between four levels of risk in a “pyramid of criticality”.Footnote 24 The bottom tier contains the vast majority of all existing AI systems.Footnote 25 They will be classified as minimal risk, thus falling outside the scope of the regulation. A large number of systems will have their place in the next level of the pyramid, “limited risk”, where the only obligations will be to supply certain information to users. A smaller number of systems will land in the next level up, “high risk”, where various restrictions apply. And at the top of the pyramid can be found the prohibited AI systems with “unacceptable risks”. Although the regulation has not yet been passed, there is a pressing need for the categorisation to be made as clear as possible as soon as possible, so that businesses can predict whether their systems will be heavily regulated or not regulated at all and adapt their planning for the coming years. This leads us to the issue of how risk is to be defined in the regulation and how the proposed risk levels are differentiated. This will be investigated through a closer look at each tier of the risk pyramid in turn, starting at the top.
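As a compact summary of the tiers just described, the sketch below maps each level of the pyramid to its regulatory consequence. It is an illustrative paraphrase only: the tier names follow the proposal, but the obligation descriptions and the article references are simplified summaries of the material presented in this section, not the text of the Act.

```python
# A minimal sketch (paraphrased, not the Act's wording) of the "pyramid of
# criticality": four risk tiers mapped to their regulatory consequences.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"   # prohibited practices (proposed Art 5)
    HIGH = "high risk"                   # permitted with restrictions (proposed Arts 6-51)
    LIMITED = "limited risk"             # transparency obligations only (proposed Art 52)
    MINIMAL = "minimal risk"             # outside the scope; voluntary codes (proposed Art 69)

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited: no placing on the market, putting into service or use",
    RiskTier.HIGH: "ex-ante conformity assessment, mandatory safety measures, CE marking, market supervision",
    RiskTier.LIMITED: "inform users that they are interacting with, or perceiving content from, an AI system",
    RiskTier.MINIMAL: "no obligations under the regulation; voluntary codes of conduct",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the simplified regulatory consequence for a given tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {obligations_for(tier)}")
```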

The prohibited AI systems with unacceptable risks are those that contravene the Union’s values, such as through violating fundamental rights.Footnote 26 The prohibitions (placing on the market, putting into service or use of an AI system) include practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness (so-called “dark patterns”) or to exploit vulnerable groups such as children in a way that is likely to cause physical or psychological harm. Social scoring by public authorities for general purposes through AI is also prohibited, as is the use of “real-time” remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement (with certain limited exceptions). These prohibitions are explained in the proposed Article 5 of the regulation. What is more concretely at risk at the top level of the AI pyramid? To name a few of the fundamental rights, freedoms and equalities of the EU Charter: human dignity (Article 1), the right to the integrity of the person (Article 3), respect for private and family life (Article 7), protection of personal data (Article 8), non-discrimination (Article 21) and the rights of the child (Article 24). Although AI systems may not put these central values at risk, it may be noted that exceptions and limitations to the EU Charter of Fundamental Rights are generally possible, in line with the provisions of Article 52 of the Charter.

To draw a parallel with the imposing of strict liability for risky activities in tort law, the conclusion in the case of the proposed AI regulation is that the AI systems listed under “unacceptable risk” are considered more risky than, for example, the handling of electricity, railway transport and dynamite blasting (all commonly subject to strict liability). This comparison also illustrates that the world has changed from one in which physical risks were at the centre of attention to today’s increasingly abstract risks, where intangible values such as dignity and privacy are the targets of protection. This development is in itself challenging for both legislators and citizens to understand and regulate. Another interesting feature is that the prohibited practices are listed in the proposed regulation. For comparison within the EU, the starting point of the GDPR is to list the conditions under which personal data may be processed (not the opposite).Footnote 27 A question that arises is whether the technique of specifying prohibited systems creates a risk of gaps in the regulation that might enable new or unspecified AI practices to operate, despite being potentially harmful.

At the next level of the pyramid, high-risk systems are regulated. High-risk systems are the main focus of the proposed AI Act and are addressed by the largest number of articles (Articles 6–51). They “create a high risk to the health and safety or fundamental rights of natural persons”.Footnote 28 The difference compared with the prohibited practices at the top of the pyramid is thus that high-risk AI systems do not in themselves contravene Union values but “only” threaten them. In the balancing act here performed by the EU legislator, the risk that high-risk systems pose to human dignity, privacy, data protection and other values is weighed against the benefits of the systems. Resembling the construction of strict liability in tort law, the benefits justify permitting the systems – but with restrictions. In order to be allowed on the European market, high-risk systems will be subject to an ex-ante conformity assessment, certain mandatory safety measures, market supervision and follow-up conformity assessments. These restrictions make up the “lifecycle” of high-risk systems.Footnote 29 A product safety marking known from EU product safety law – the CEFootnote 30 marking – will show that high-risk AI systems are in conformity with the requirements of the regulation (Article 49) and approved by the competent public authority. On a larger scale, a database of approved high-risk AI systems will be created at the EU level (Article 60).

How is a system classified as high risk? According to the explanatory memorandum of the proposed act, this will depend on the purpose of the system in connection with existing product safety legislation.Footnote 31 Two main high-risk categories are: (1) AI systems used as safety components of products that are subject to third-party ex-ante conformity assessment (machines, medical devices, toys); and (2) standalone AI systems with mainly fundamental rights implications, explicitly listed in an annex to the proposed AI Act (education, employment, public services, border control, law enforcement and more). This list should be seen as dynamic and may be adjusted in line with developing technology.

Continuing down the pyramid, limited-risk AI systems can be found at the third level and are regulated in the proposed Article 52 of the AI Act. Such systems are permitted subject only to certain transparency obligations. The obligations will apply to systems that interact with humans, systems that are used to detect emotions or to determine association with (social) categories based on biometric data and systems that generate or manipulate content (“deep fakes”).Footnote 32 The motivation for the proposed article is that people must be informed when they are interacting with AI systems and must know when their emotions or characteristics are recognised by automated means, or when image, video or audio content is AI generated. Through being informed, people are given the opportunity to make informed choices.Footnote 33 The risk in this scenario is that a person may be misled – led to believe that they are interacting with another person when they are in fact interacting with a system, or perceiving content that they believe to be authentic when it is actually manipulated. Such situations can lead to a decline in trust amongst consumers of new technologies, which would be undesirable. The EU goal is to build trust in the AI area – to achieve a responsible, balanced regulation that encourages the use of AI and thus boosts innovation. Trust requires that respect for fundamental rights is maintained throughout the Union.Footnote 34

At the bottom of the pyramid of criticality, minimal-risk AI systems (such as spam filters, computer games, chatbots and customer service systems) fall outside the scope of the regulation.Footnote 35 All AI systems not explicitly included in the tiers above are defined as minimal risk, which means that most AI systems used today will not be subject to EU rules. The fact that this layer of the pyramid is described as “minimal risk” and not “non-existent risk” appears reasonable, as risk is practically never completely avoidable, whether in AI or in other aspects of life. Despite the classification of “minimal risk”, Article 69 of the proposed AI Act suggests that codes of conduct should be developed for these systems. With time, control mechanisms such as human oversight (Article 14), transparency and documentation could thus spread from regulated AI services to minimal-risk systems.

To summarise this overview of the risk-based approach in the AI Act proposal, the high-risk category is the absolute focus of the regulation. One may even ask why the “minimal-risk” category is included in the pyramid of criticality. The focus of the following section will be on how this risk structure with its different tiers relates to the tort law issues introduced in Section II.

IV. Some tort law reflections on the risk-based approach

The proposed “pyramid of criticality” of the AI Act is interesting from several different perspectives. One central issue that it raises, from a tort law point of view and in light of our earlier discussion on risk, is this: if a risk assessment is to be fair and risk-based requirements proportionate, is it at all possible to determine risk beforehand in fixed categories? The first challenge here is, as presented above regarding the tort law method for evaluating negligence, that risk is typically assessed in a specific situation. The outcome of a risk assessment will thus vary depending on every single circumstance of a given situation. A second challenge is that risk can be described as something highly subjective that differs from person to person. The question is thus whether it is possible to harmonise these different perceptions of risk and capture a large number of risks while still achieving a balanced regulation that allows for innovation and a proportionate use of AI systems.

The pragmatic answer to the queries above would be that it is not possible – not even in an insurance contract – to describe every risk that an AI system could pose, let alone to different people, as the risks are largely unknown today. Therefore, standardisation is necessary. A general provision built on negligence, such as “a person who causes harm or loss by using an AI system without sufficient care shall compensate the damages”, would open the door to individual interpretations of “sufficient care” and to various experimental and potentially harmful uses of AI systems – where actions would be assessed only after damage of an unknown extent had already occurred. Such a system is reasonable (and well established) for pure accidents in tort law, such as when someone smashes a vase or stumbles over someone’s foot, whereas in the case of AI there is a known element in every given situation: the involvement of an AI system. Instead of relying on a traditional general negligence assessment, AI systems have been deemed so risky that their use must be regulated beforehand.Footnote 36

To develop the reasoning on the connection between the pyramid of criticality and the risk assessment within fault-based liability, it should be emphasised that the existence of predetermined risk categories in the proposed AI regulation does not exclude the impact of specific elements of the risk assessment. Within every risk category, the benefits of the AI systems (economic, such as efficiency; social, such as faster distribution of services) have been weighed against the risks (to fundamental rights and freedoms, health and safety, and vulnerable persons) that they typically entail. With regard to each category, this can be explained as follows.

In the case of prohibited practices, the conclusion of the balancing act is that the risks are generally too high to permit the systems – despite the benefits they offer. Regarding high-risk systems, the benefits justify risks of a certain level (with control mechanisms in place). When it comes to limited-risk systems, the risks are so moderate that it is enough to make users aware of them. This could open up a discussion on another tort law phenomenon mentioned above: assumption of risk.Footnote 37 Minimal-risk AI appears uncontroversial, as it brings benefits without any identifiable risks. That said, such risks could of course still exist – or, perhaps more likely, emerge. Who can guarantee that video game data or spam filter data will never be used in illicit ways, such as to profile individuals? In order to serve their function, the risk categories and their delimitations must be continually monitored and adapted to developments in the field.

Continuing to the concept of strict liability in tort law, there are many similarities between traditional arguments for strict liability and the EU approach to high-risk AI systems. As mentioned earlier, strict liability is considered suitable in areas where serious risks are presented by societally necessary but potentially dangerous activities.Footnote 38 These considerations match the thinking behind permitting high-risk AI systems subject to certain restrictions. The risks they bring are generally so high that such restrictions are justified.

So, will strict liability be introduced for these systems in the EU? The issue of liability is not addressed in the AI Act but in other legal initiatives. According to the proposal for a revised directive on product liability, AI products are covered by the directive – meaning that damages may be awarded for material harm (including medically recognised psychological harm) caused by a defective AI-enabled good.Footnote 39 Both in this proposal and in the proposed directive on civil liability for AI, a central theme is the alleviation of the plaintiff’s burden of proof in cases where it is challenging to establish the causal link between damages and an AI system.Footnote 40 While these proposals are thus not primarily focused on AI risk categories, the topic of risk is more prominent in other current initiatives such as the EP resolution on civil liability.Footnote 41 The proposed AI Act refers to this resolution and a number of other resolutions that, together with the regulation, will form a “wider comprehensive package of measures that address problems posed by the development and use of AI”.Footnote 42 The resolution on civil liability will be the focus of the next section of this paper.

V. Risk and damages in the European Parliament resolution on civil liability

In short, the EP resolution suggests harmonising the legal frameworks of the Member States concerning civil liability claims and imposing strict liability for operators of high-risk AI systems. This ambitious vision will most certainly lead to a number of challenges. Not only is the concept of high-risk AI systems novel – the introduction of strict liability in a new area is always controversial and, what is more, the attitude to strict liability differs significantly throughout the EU.Footnote 43

To put these challenges into perspective, let us take a closer look at the content of the resolution.

The EP starts with some general remarks on the objectives of liability, the concept of strict liability, the balancing of interests between compensation of damages and encouraging innovation in the AI sector and the possibility for Member States to adjust their liability rules and adapt them to certain actors or activities.Footnote 44 It goes on to state that the issue of a civil liability regime for AI should be the subject of a broad public debate, taking all interests involved into consideration so that unjustified fears and misunderstandings of the new AI technologies amongst citizens can be avoided.Footnote 45 Furthermore, the complications of applying traditional tort law principles such as risk assessments and causality requirements to AI systems are addressed:

… certain AI-systems present significant legal challenges for the existing liability framework and could lead to situations in which their opacity could make it extremely expensive or even impossible to identify who was in control of the risk associated with the AI-system, or which code, input or data have ultimately caused the harmful operation … this factor could make it harder to identify the link between harm or damage and the behaviour causing it, with the result that victims might not receive adequate compensation.Footnote 46

As an answer to these challenges, the model of concentrating liability for AI systems on certain actors (those who “create, maintain or control the risk associated with the AI-system”) is justified in paragraph 7 of the resolution. In paragraph 10, it is stated that the resolution will focus on operators of AI systems, and the following paragraphs go on to lay down the foundations for operator liability.

The EP concludes that different liability rules should apply to different risks. This seems to be in line with the tort law reflections presented above in relation to the pyramid of criticality of the proposed general regulation on AI. Based on the danger that autonomous high-risk AI systems pose to the general public and the legal challenges they pose to the existing civil liability systems of the Member States, the EP suggests that a common strict liability regime be set up for high-risk AI systems (paragraph 14). The suggestion is quite radical, as it means deciding beforehand that all high-risk AI systems resemble the kind of “dangerous activities” that, as described above in connection with the tort law assessments, justify strict liability.

The resolution continues by stating that a risk-based approach must be based on clear criteria and an appropriate definition of “high risk” so as to provide for legal certainty. Helpfully, such a definition is actually suggested in detail in paragraph 15:

… an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and goes beyond what can reasonably be expected … when determining whether an AI-system is high-risk, the sector in which significant risks can be expected to arise and the nature of the activities undertaken must also be taken into account … the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk causes harm or damage and the manner in which the AI-system is being used.

It is notable that this definition of risk builds on elements from established risk evaluations such as the negligence assessment in tort law described above, with its balancing of risk factors, potential harm and actors involved.Footnote 47 This could prove helpful when examples and arguments are needed for the classification of AI systems. An immediate question concerns who is to conduct the risk assessment, as it may well be impacted by the assessor’s role as, for instance, operator or provider. Such conflicts of interest are supposed to be avoided through a monitored system with notified bodies, according to Chapter 5 of the proposed AI regulation. The decisions of the notified bodies are to be appealable. Breaches of the requirements for high-risk AI systems (and any breaches relating to the prohibited unacceptable-risk AI systems) shall be subject to penalties including large administrative fines (Article 71). Thus, the preventative aspect forms an important part of the control system of the proposed AI regulation.
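To make the structure of the quoted test more tangible, the sketch below renders the interplay described in paragraph 15 – severity of possible harm, likelihood of harm and the manner of use, adjusted for the sector and nature of the activity – as a simple scoring function. The scales, the sector uplift and the threshold are assumptions introduced purely for illustration; they are not part of the resolution, and the sketch deliberately leaves out the “random and beyond reasonable expectation” element, which resists quantification.

```python
# A minimal, purely illustrative sketch of the high-risk test quoted above.
# All numeric scales, the sector uplift and the threshold are assumptions.

def is_high_risk(severity: float, likelihood: float, exposure: float,
                 risky_sector: bool, threshold: float = 0.2) -> bool:
    """severity, likelihood, exposure: assumed 0-1 scales.
    'exposure' stands in for the manner in which the AI system is being used;
    'risky_sector' reflects the sector and nature of the activities undertaken."""
    significance = severity * likelihood * exposure   # interplay of the three factors
    if risky_sector:
        significance *= 1.5                           # assumed uplift for sensitive sectors
    return significance > threshold

# Example: an autonomous system deployed widely in a sensitive sector.
print(is_high_risk(severity=0.8, likelihood=0.4, exposure=0.9, risky_sector=True))  # True here
```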

The EP suggests (paragraph 16) that all high-risk systems be listed in an Annex to the AI regulation, which should be reviewed every six months in order to capture technological developments. Drawing on product safety – an area where EU regulation has been in place for several decadesFootnote 48 and which has served as inspiration for the suggested CE marking on the AI marketFootnote 49 – the resolution recommends (paragraph 17) that the high-risk assessment of an AI system be started at the same time as the product safety assessment. This implies that the risk assessment should not be rushed but may be complicated and time-consuming, which seems both realistic and reasonable considering the consequences of the risk classification.

AI systems that are not listed in the Annex should be subject to fault-based liability (paragraph 20), but with a presumption of fault on the part of the operator. Such a construction would underline the responsibility of those in charge of AI systems with limited risk, and knowledge of a lower threshold for compensation could reassure consumers of limited-risk AI solutions. To balance this, it will still be possible for operators to exculpate themselves from fault-based liability by showing that they have fulfilled their duty of care. A similar construction, though based on strict liability, can be found in the GDPR’s Article 82 on damages for personal data breaches.Footnote 50 In conclusion, all AI systems that fall within the scope of the suggested AI regulation will thus carry stricter liability than the usual fault-based rule in tort law. Only the last tier, “minimal-risk AI systems”, will be subject to traditional negligence assessments. This mirrors the attitude towards AI systems as generally risky and in need of augmented control mechanisms, with corresponding solutions for compensation of losses.
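The resulting allocation of liability rules across the tiers can be summarised schematically as below. The sketch paraphrases the scheme described in this section – strict liability for listed high-risk systems, presumed fault with a possibility of exculpation for other in-scope systems, and ordinary negligence outside the regulation’s scope; the tier names and boolean inputs are illustrative assumptions, not terms from the resolution.

```python
# A minimal sketch (illustrative paraphrase) of how the EP resolution allocates
# liability rules across risk tiers, as summarised in the surrounding text.
from enum import Enum, auto

class Tier(Enum):
    HIGH_RISK_LISTED = auto()   # listed in the Annex: strict liability
    OTHER_IN_SCOPE = auto()     # other systems within the regulation: presumed fault
    MINIMAL_RISK = auto()       # outside the regulation: traditional negligence

def operator_liable(tier: Tier, operator_proved_due_care: bool,
                    victim_proved_fault: bool) -> bool:
    if tier is Tier.HIGH_RISK_LISTED:
        return True                              # strict liability: fault is irrelevant
    if tier is Tier.OTHER_IN_SCOPE:
        return not operator_proved_due_care      # presumption of fault, exculpation possible
    return victim_proved_fault                   # ordinary fault-based rule

# Example: a limited-risk system whose operator cannot show that due care was taken.
print(operator_liable(Tier.OTHER_IN_SCOPE, operator_proved_due_care=False,
                      victim_proved_fault=False))  # True under this sketch
```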

The EP acknowledges that the launching of a common regime for strict liability is a major project at the European level.Footnote 51 In paragraph 19, some protected interests that should be covered by the planned regime are initially established:

… in line with strict liability systems of the Member States, the proposed Regulation should cover violations of the important legally protected rights to life, health, physical integrity and property, and should set out the amounts and extent of compensation, as well as the limitation period; … the proposed Regulation should also incorporate significant immaterial harm that results in a verifiable economic loss above a threshold harmonised in Union liability law, that balances the access to justice of affected persons and the interests of other involved persons.

The paragraph goes on to urge the Commission to re-evaluate and align the thresholds for damages in Union law (a separate question here is whether there are any such established thresholds) and analyse in depth “the legal traditions in all Member States and their existing national laws that grant compensation for immaterial harm, in order to evaluate if the inclusion of immaterial harm in AI-specific legislative acts is necessary and if it contradicts the existing Union legal framework or undermines the national law of the Member States”.

This wording recognises that the subject of immaterial harm is often sensitive in tort law. In Sweden, for instance, the development of immaterial damages has been cautious over the years, with a careful expansion during the last few decades.Footnote 52 The main rule in Swedish tort law is still that economic harm is compensated without the victim having to rely on any particular rules, while immaterial harm is compensated only when a specific legal basis for the claim can be found.Footnote 53 Against this background, it is important that the possibilities throughout the EU to compensate immaterial harm resulting from the use of AI systems are scrutinised and, if necessary, strengthened. As with harm resulting from breaches of data protection rules, immaterial harm could potentially become the most common form of harm caused by AI systems. Therefore, similarly to Article 82 GDPR on damages, regulating compensation for immaterial harm may well be justified in the AI area.

VI. Joint responsibility and insuring strict liability for AI systems

Another separate liability issue mentioned in the introduction of the EP resolution is the likelihood that AI systems will often be combined with non-AI systems and human actions.Footnote 54 How should such interaction be evaluated from a tort law perspective? Some overarching measures are suggested by the EP:

… sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges and eliminate the risk of users being less willing to accept emerging technology; … fair compensation procedures mean that each person who suffers harm caused by AI-systems or whose property damage is caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system; … the user needs to be sure that potential damage caused by systems using AI is covered by adequate insurance and that there is a defined legal route for redress.

Thus, predictability is seen as key to the development of a liability regime for AI systems. Victims are to have the same level of protection and to receive fair compensation for damage with or without the involvement of an AI system (or with the involvement of both). It is also important to note that insurance will be required for operators of AI systems. This is common when strict liability is involved,Footnote 55 as the objective of making damage claims easier for victims would be defeated if there were no actual money to be found at the tortfeasor’s end. Additionally, it is easier for businesses to insure against the risk of liability than it is for individuals to handle such risks. Insurance is instead built into the price, with the consumer collective paying more for the product or service. How will these insurance policies work, considering that the object of insurance is largely unknown? Even though this appears to be a relevant question from a tort law point of view, the concept of insuring the unknown is no novelty within the insurance industry, where the entire idea of insuring risks builds on hypothetical scenarios. In fact, it can be assumed that risks and liability connected to AI are already regulated in a variety of insurance clauses around the world.Footnote 56

The issue of insurance is addressed in paragraphs 23–25 of the EP resolution. Regarding the uncertainties of AI risks, the EP states that “uncertainty regarding risks should not make insurance premiums prohibitively high and thereby an obstacle to research and innovation” (paragraph 24), and that “the Commission should work closely with the insurance sector to see how data and innovative models can be used to create insurance policies that offer adequate coverage for an affordable price” (paragraph 25). These statements express an openness concerning the fact that insuring AI systems is a work in progress on the market and will need to be monitored and refined in the years to come.

VII. Conclusions

This article has shown that the existing and upcoming challenges are many for legislators, businesses and individuals in the area of AI, risk and liability. Some main themes explored above concern how risk is to be defined and delimited in a legal AI context, how liability for AI systems is to be constructed at the EU level and how different societal and individual interests can be balanced in the era of AI. For the time being, it would be unrealistic to provide solutions to these challenges. They are continuously changing and the proposed regulation is yet to be finalised, as are the potential rules on liability for AI systems.Footnote 57

In line with the dynamic status of the research field, the purpose of this paper has rather been to reach a better understanding of the difficulties connected to regulating AI and risk, using the traditional perspective of tort law. Within this legal discipline, at least two established legal figures have been identified to help fit the AI pyramid of criticality into a more familiar legal frame: the negligence assessment within fault-based liability and the concept of strict liability.Footnote 58 Both of these tort law evaluations regarding liability encompass a variety of arguments and classifications concerning risk. The dense case law and theoretical works surrounding these figures provide us with important tools to achieve a proportionate balancing act, in order to make the most of the many benefits of AI systems while safeguarding fundamental rights and ensuring compensation for those who suffer losses. As with every innovation in society, new risks arise. In this case the risks are especially unknown, with a much broader scope and a larger potential impact than in the specific areas where strict liability has been imposed before. Therefore, as demonstrated in this article, we should make the most of the legal tools we are familiar with – such as the established and often effective instrument known as tort law.

Acknowledgments

Thanks to the members of the WASP-HS project “AI and the Financial Markets: Accountability and Risk Management with Legal Tools” and the group members of the sister project “AI-based RegTech”, which is also WASP-HS-funded, for valuable input at an early writing stage. The author is also grateful to the anonymous reviewer for very useful comments.

Competing interests

The author declares none.

References

1 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM(2021) 206 final.

2 For an overview of the concept of risk in scientific research and specifically in a legal setting, see J Chamberlain and A Kotsios, “Defining Risk and Promoting Trust in AI Systems” in M Bergström and V Mitsilegas (eds), EU Law in the Digital Age, Swedish Studies in European Law (London, Hart/Bloomsbury Publishing, forthcoming 2023).

3 See, on a policy level, J Black and R Baldwin, “Really Responsive Risk-Based Regulation” (2010) 32 Law & Policy 181; P O’Malley, Risk, Uncertainty and Government (London, Taylor & Francis Group 2004).

4 See C Sunstein, Risk and Reason: Safety, Law, and the Environment (Cambridge, Cambridge University Press 2004).

5 See J Jarvis Thomson, Rights, Restitution, & Risk: Essays in Moral Theory (Cambridge, MA, Harvard University Press 1986).

6 European Parliament resolution of 20 October 2020 on a civil liability regime for artificial intelligence, 2020/2014(INL).

7 Proposal for a Directive of the European Parliament and of the Council on liability for defective products, COM(2022) 495 final.

8 Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), COM(2022) 496 final.

9 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC.

10 See, in a Swedish context, B Bengtsson, Försäkringsavtalsrätt (Insurance Contract Law, Stockholm, Norstedts Juridik 2019) pp 26–28, where different types of clauses relating to risks are discussed. On the topic of risk control in English insurance law, see B Soyer, “Risk Control Clauses in Insurance Law: Law Reform and the Future” (2016) 75 Cambridge Law Journal 109.

11 See C van Dam, European Tort Law (Oxford, Oxford University Press 2013) pp 234–39 on the negligence balancing act, and pp 239–40 on risk assessment. A separate issue is negligence on the part of the victim; see ibid, p 241.

12 This weighing together of factors is a way of thinking that concerns substantive law but may simultaneously have parallels in other sciences, making the balancing act possible to “translate” to other fields such as the economic cost–benefit analysis. See, for instance, Sunstein, supra, note 4, pp 5–7, for a legal perspective on the cost–benefit analysis with a specific focus on the cost of regulation. For Sunstein, the cost–benefit balancing is primarily a cognitive tool and can help the legislator identify where there is real need for regulation and where it would be disproportionately intrusive. This is especially useful in times when certain risks are exaggerated and tend to overshadow more abstract risks that are actually more serious.

13 Two examples of the negligence assessment in Finnish case law are HD 1996:117 (fire incident involving minors) and HD 2011:107 (collision between a hunting dog and a car).

14 See, for an interesting discussion on negligence and the objectivity of the standard of care, JCP Goldberg and BC Zipursky, “The Strict Liability in Fault and the Fault in Strict Liability” (2016) 85 Fordham Law Review 743.

15 For an analysis of strict liability in Nordic tort law, with many examples from the Nordic countries (including the Swedish Law [2007:1150] on Supervision of Dogs and Cats), see T Wilhelmsson, “Strict Liability for Dangerous Activities in Nordic Tort Law – An Adequate Answer to Late Modern Uncertainty?” (2019) 16 Otago Law Review 219. For an overview and discussion on the European level regarding strict liability and new technologies, see European Commission, Liability for Artificial Intelligence and other Emerging Technologies, pp 25–27.

16 See European Commission, Liability for Artificial Intelligence and other Emerging Technologies, p 26.

17 Moreover, control over dangerous objects/enterprises must be possible. It is notable that the strict liability for dogs is not paralleled by such responsibility for cats. Dogs can cause more damage than cats, and it is possible to discipline and control them to a much larger extent than cats (if control over cats is even possible).

18 For a critical discussion on cost allocation in tort law, see G Calabresi, The Costs of Accidents – A Legal and Economic Analysis (New Haven, CT, Yale University Press 1970).

19 The issues are close to Sunstein’s arguments concerning the cost of regulation, mentioned above. Sunstein concludes that the legislator, in order to make a reasonable decision to proceed with regulation in a certain area, must have all cards on the table and thus have knowledge of all circumstances. In this situation, the cost–benefit analysis can help pinpoint all of the relevant facts (Sunstein, supra, note 4, p 7).

20 See further on the topic of assumption of risk and consent van Dam, supra, note 11, pp 256–58.

21 See, for instance, Nytt Juridiskt Arkiv 2018, p 591 (“cross-checking”, violence during a game of ice hockey), para 8; 18.

22 See the US Algorithmic Accountability Act (a bill introduced in 2019) and the Canadian Directive on Automated Decision-Making (adopted in 2019), as well as a suggestion for a similar tiered approach in Germany in 2019, Opinion of the German Data Ethics Commission, pp 173–82. It should be emphasised that risk structures have figured earlier in EU legislation, such as that regarding the food industry and the handling of chemicals. These specialised risk regulations will not be discussed further within this article.

23 See E Gaumond, “Artificial Intelligence Act: What Is The European Approach for AI?” (Lawfare, 2021) <https://www.lawfareblog.com/artificial-intelligence-act-what-european-approach-ai> (last accessed 7 November 2022).

24 For an illustration of the pyramid, see EU Commission, “Regulatory framework proposal on artificial intelligence” <https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai> (last accessed 7 November 2022). Regarding the term “pyramid of criticality”, see M Kop, “EU Artificial Intelligence Act: The European Approach to AI”, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No 2/2021.

25 EU Commission, supra, note 24.

26 Explanatory memorandum of the regulation, 5.2.2.

27 However, it should be noted that the “blacklisting” technique can be found in existing EU regulation, such as in the Unfair Commercial Practices Directive (2005/29/EC).

28 Explanatory memorandum of the regulation, 5.2.3.

29 See Explanatory memorandum of the regulation, 5.2.3, where the different steps and mandatory requirements are explained. These include data and data governance, documentation and registration, transparency, traceability, human oversight, robustness and security.

30 Conformité Européenne (European conformity).

31 Explanatory memorandum of the regulation, 5.2.3.

32 Explanatory memorandum of the regulation, 5.2.4.

33 There are some proposed exceptions in Art 52: the obligations on information and transparency do not apply when the use of an AI system is authorised by law to detect, prevent, investigate and prosecute criminal offences or when it is necessary in order to exercise the right to freedom of expression and the right to freedom of the arts and sciences of the EU Charter of Fundamental Rights.

34 These positions appear to be well founded. A 2021 survey of the Swedish population’s attitude to AI shows that curiosity and knowledge regarding AI technology are on the rise, in combination with significant apprehension about the risks to privacy and personal data. See the report “Svenska folket och AI” (“The Swedish Population and AI”) pp 34–40.

35 It should be noted that not all systems described as AI build on AI solutions; in many cases, simpler algorithms perform the function that companies describe as AI. It will be interesting to see how this rhetoric holds up once stricter requirements are introduced for AI (but not for “simpler” solutions).

36 A potential concern with this black-and-white approach is whether “low-risk” systems – left unsupervised and without obligations – will in fact come to include practices that do infringe rights and create significant risks. A connected issue is how the new human oversight is to work in practice and who will be competent to carry it out.

37 One issue here is that a person should know what they consent to when it comes to risk. There is a high probability that not everyone understands what a “deep fake” is or what difference it makes that they are communicating with a person versus an AI, even though they are being informed about it.

38 See European Commission, Liability for Artificial Intelligence and other Emerging Technologies, pp 25–27.

39 See the Explanatory memorandum of the proposal, 1.3, and proposed Arts 4 and 5.

40 See Art 9 of the proposed revision of the Product Liability Directive; Explanatory Memorandum and Art 1 of the proposed AI Liability Directive (it may, however, be noted that a reference to the high-risk category of the AI Act is included in the list of definitions in Art 2). For interesting suggestions on adapting the product liability regime to new technologies and ensuring a fair distribution of risks, see A Bertolini, Artificial Intelligence and Civil Liability (Luxembourg, Publications Office of the European Union 2020), especially p 60 onwards.

41 European Parliament resolution of 20 October 2020 on a civil liability regime for artificial intelligence, 2020/2014(INL).

42 Explanatory memorandum of the regulation, 1.1; 1.3.

43 European Commission, Liability for Artificial Intelligence and other Emerging Technologies, p 26.

44 European Parliament resolution of 20 October 2020 on a civil liability regime for artificial intelligence, 2020/2014(INL), paras A–C.

45 European Parliament resolution of 20 October 2020 on a civil liability regime for artificial intelligence, 2020/2014(INL), para E.

46 European Parliament resolution of 20 October 2020 on a civil liability regime for artificial intelligence, 2020/2014(INL), para H. The arguments are familiar from the discussions leading up to and motives behind the Product Liability Directive (85/374/EEC), see especially the Directive Recitals.

47 See Section II above.

48 Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety.

49 For a critical discussion of the product safety background in the AI context, see M Veale and FZ Borgesius, “Demystifying the Draft EU Artificial Intelligence Act” (2021) 22 Computer Law Review International 97, especially pp 102–05.

50 For a comprehensive analysis of the risk-based characteristics of the GDPR, see R Gellert, The Risk-Based Approach to Data Protection (Oxford, Oxford University Press 2020).

51 Again, this can be compared to the decade-long debate and negotiations leading up to the Product Liability Directive; see N Reich, “Product Safety and Product Liability – An Analysis of the EEC Council Directive of 25 July 1985 on the Approximation of the Laws, Regulations, and Administrative Provisions of the Member States Concerning Liability for Defective Products” (1986) 9 Journal of Consumer Policy 133.

52 See J Chamberlain, Integritet och skadestånd (Privacy Torts in Sweden, Uppsala, Iustus 2020) pp 168–77.

53 Such a provision is Chapter 2, § 3 of the Swedish Tort Liability Act (skadeståndslag, 1972:207).

54 European Parliament resolution of 20 October 2020 on a civil liability regime for artificial intelligence, 2020/2014(INL), para I.

55 European Commission, Liability for Artificial Intelligence and other Emerging Technologies, pp 26–27.

56 See on the topic of contracts and unknown risks, with the example of force majeure, F Ghodoosi, “Contracting Risks” (2022) 2022 University of Illinois Law Review 805.

57 For a comprehensive overview and analysis of the developments in the EU and Member States, see B Schütte, L Majewski and K Havu, “Damages Liability for Harm Caused by Artificial Intelligence – EU Law in Flux” (2021) Helsinki Legal Studies Research Paper No 69.

58 The issue of assumption of risk will probably become increasingly important as AI regulations and practices become established, but it seems to be of more peripheral interest at this phase of development.