A. Introduction
In May 2024, the European Union adopted landmark legislation, the first binding law (“the Directive”) aiming to harmonize legislative and policy responses to violence against women and domestic violence across all its Member States.Footnote 1 The Directive prohibits various forms of sexualized and gendered harms in both physical and digital spaces. Moreover, it seeks to enhance support and assistance for victims by building on the EU framework on victims’ rights, as well as addressing the specific needs arising from gender-based violence. It also places significant emphasis on prevention measures and on advancing research into violence against women. Ultimately, it aims to facilitate and enhance cooperation and collaboration among Member States, thereby fostering an environment of freedom, security, and justice.
While the adoption of the Directive marks a significant milestone in the EU’s commitment to combat violence against women within its borders, it comes after decades of calls for common action that have often gone unheard.Footnote 2 Not surprisingly, therefore, the adoption of the Directive has faced various hurdles and challenges related to legislative competence, terminology, and a range of normative considerations. Furthermore, the process is far from complete. Following its publication in the Official Journal of the European Union, Member States are tasked with transposing it into their national frameworks within the next three years, in accordance with Article 49. However, the nature of EU directives allows Member States to exercise discretion in achieving their objectives, which may result in variations in implementation across countries, though it also presents an opportunity to enhance protections and support.
Against this background, this Article offers the first comprehensive critical analysis of the Directive, focusing specifically on the regulation of image-based sexual abuse (“IBSA”). This encompasses “all forms of the non-consensual creating, taking or sharing of intimate images or videos, including altered or manipulated media, and threats to distribute such material.”Footnote 3 This is an area of gender-based violence that is rising exponentially and causes significant harm, yet it is often trivialized, with only piecemeal legal responses. Moreover, when considered alongside Regulation (EU) 2022/2065 (the “Digital Services Act” or “DSA”) and Regulation (EU) 2024/1689 (the “AI Act”), there is an opportunity for a comprehensive approach to addressing IBSA across the EU. This approach is beneficial but should be implemented in a manner that embeds liberty and autonomy, along with safety, for women and people belonging to historically oppressed groups in cyberspace. Accordingly, this Article offers an in-depth legal analysis of the current EU legal and policy response to IBSA, emphasizing its goals and shortcomings. It aims to guide national-level implementation, addressing existing gaps and enhancing legal protections.
The Article is structured in five Sections. Following this introduction, we review the concept of IBSA and its current legislative landscape across the Union. Subsequently, we provide an overview of the journey leading to the current text of the Directive and critically analyze it, highlighting key amendments made during the legislative and policy-making process. Further, we contextualize the regulation of IBSA within the broader framework of the Digital Services Act and the AI Act, assessing whether and to what degree these complementary pieces of legislation could fill existing gaps and contribute to the overall effectiveness of the EU framework in combating this issue. In conclusion, the Directive, the Digital Services Act, and the AI Act mark an initial effort to respond consistently to IBSA at the EU level. However, they do not yet provide a comprehensive solution, as they fall short of capturing its full scope and the diverse experiences of its victims. Effective transposition and implementation at the national level, beyond the minimum required, will therefore be essential to address these gaps and enhance the protection offered by this legislation.
B. Image-Based Sexual Abuse: A Sexualized and Gendered Harm Needing Common Action Across the EU
The term “image-based sexual abuse” refers to all forms of the non-consensual creating, taking or sharing of intimate images or videos, including altered or manipulated media, and threats to distribute them.Footnote 4 It was coined in response to the prevalent use of “revenge pornography,” a term that is both victim-blaming and misleading.Footnote 5 By its nature, “revenge pornography” implies that victims provoked retaliatory actions, overlooking the gender-based power dynamics often present in such situations. Moreover, these harmful acts often go beyond the pornographic narrative, being produced for purposes other than sexual arousal and gratification.Footnote 6 Furthermore, the term fails to capture the full spectrum of the issue, focusing solely on the malicious non-consensual distribution of intimate images by ex-partners. Instead, “image-based sexual abuse” offers a more comprehensive understanding.Footnote 7 It encompasses not only the distribution but also the non-consensual taking of intimate images, such as covertly recording individuals while they are toileting or changing, which may involve the use of hidden cameras in public places.Footnote 8 Additionally, it recognizes the significant impact on victims of threats to distribute such material.Footnote 9 Finally, it covers the increasing creation of intimate content, including the use of AI to generate manipulated images or videos, commonly known as “deepfake porn.”Footnote 10
This latter term emerged in 2017 to describe sexually explicit content that superimposed women’s images onto pornographic material, initially targeting celebrities.Footnote 11 With advancements in generative AI, neural networks trained on extensive datasets can easily create manipulated media, replicating real individuals and generating fictional ones.Footnote 12 However, a universally agreed-upon definition has yet to be established.Footnote 13 As this form of abuse has become prevalent only in recent years, there is less research on the issue compared to other forms of IBSA. It is clear, however, that deepfake technology is highly accessible and predominantly used for intimate and sexualized content. Studies indicate that over 90% of deepfakes fall into this category.Footnote 14 Nudification apps, designed to undress people in photos, are gaining popularity, offering greater accessibility and accuracy while requiring less input data than earlier technologies.Footnote 15 Recent advancements, such as OpenAI’s “Sora” model that generates video from text input,Footnote 16 are likely to exacerbate this issue further. Regardless, sexually explicit deepfakes already cause victims to suffer a range of emotional, psychological, professional, and relational adverse effects.Footnote 17 These impacts often persist long after the initial abuse and extend to society,Footnote 18 mirroring the negative consequences of other forms of IBSA. Altogether, the creation of sexually explicit deepfakes should be considered another serious form of sexualized and gendered harm, requiring more robust legislative and policy responses. To date, the main challenge lies in effectively overseeing and enforcing existing regulations, as well as making necessary adjustments to address this evolving threat.Footnote 19
Beyond these nuances in terminology and scope, three characteristics define the nature of IBSA. First, it is gender-based, predominantly targeting women because of their gender or disproportionately affecting them. This phenomenon is deeply intertwined with the historical subordination of women and systemic violence against them in society.Footnote 20 Second, IBSA is part of a continuum of sexualized and gendered harms spanning both online and offline realms. This understanding aligns with a theory formulated by Liz Kelly in the late 1980s, which highlighted a widespread pattern of sexual violence against women spanning everyday sexism, rape, and many other forms of abuse.Footnote 21 At present, technological advancements have amplified and interconnected these experiences, bridging the gap between online and offline experiences of abuse.Footnote 22 Third, in the late 1980s, Kimberlé Crenshaw introduced the term “intersectionality” to illustrate the connection between gender, race, and other personal characteristics or social systems that collectively contribute to subordination and oppression in private and public spheres.Footnote 23 This framework is likewise evident in cases of IBSA, where individuals occupying public positions, younger women, LGBTQIA* people, and members of historically oppressed groups, including people from ethnic and religious minority communities, are disproportionately targeted and harmed.Footnote 24
There is a growing body of quantitative and qualitative data on IBSA generated by various stakeholders, including civil society organizations (“CSOs”) and non-governmental organizations (“NGOs”), international and national bodies, as well as scholars, employing diverse methodologies.Footnote 25 This encompasses survey data collected through household and online surveys, as well as qualitative data derived from interviews and focus group discussions.Footnote 26 While differences in scope and methodology sometimes hinder obtaining a clear picture of the prevalence of this sexualized and gendered harm, the evidence demonstrates that IBSA is alarmingly common and has increased, especially during the Covid-19 pandemic as people shifted their lives online.Footnote 27 However, there is significant underreporting of violence against women, a trend that is reflected in online and technology-facilitated violence, including IBSA.Footnote 28 Numerous factors contribute to this underreporting, including victims’ lack of awareness of their victimization, fear of being blamed, and the inadequate sensitivity and responsiveness of law enforcement agencies (“LEAs”) and legal professionals, who often downplay victims’ experiences and provide insufficient support, such as merely recommending improvements to privacy settings.Footnote 29
Another area of research concerns the severe harm caused by IBSA, both to individual victims and to society as a whole. Studies consistently demonstrate that many victims experience a profound social rupture, dividing their lives into distinct periods before and after the abuse.Footnote 30 The harm they endure extends across various dimensions, encompassing physical and psychological well-being, economic setbacks such as work absences and the financial burdens of seeking assistance and support, and social isolation stemming from victim-blaming responses and a general sense of distrust.Footnote 31 Moreover, the prevalent response of victims, who often censor themselves and withdraw from online spaces, exacerbates the harm inflicted on society.Footnote 32 This withdrawal diminishes diversity and freedom of expression in cyberspace and risks broadening the existing digital divide between women and men.Footnote 33 At the same time, the response to IBSA is likely to carry significant socio-economic costs for society, including lost economic productivity and increased health-related expenses.Footnote 34 On this point, a European Parliamentary Research Service study has recently quantified the cost of online and technology-facilitated violence against women to be in the order of €49.0 to €89.3 billion.Footnote 35
The harms caused by IBSA can be legally qualified as violations of victims’ fundamental rights. IBSA is widely recognized as an infringement of human dignity, deliberately eroding a person’s self-worth and failing to afford them respect. This notion of dignity extends beyond the individual, intersecting with broader issues of gender equality and other social oppressions.Footnote 36 Scholars further highlight specific rights violations, which may vary by legal system. In European scholarship, for instance, IBSA is considered a serious violation of sexual autonomy, stripping individuals of control over their decisions to engage in or refrain from sexual activities such as the taking, creating, and sharing of intimate materials.Footnote 37 In contrast, United States literature focuses on how IBSA violates sexual and intimate privacy, infringing upon both the physical and digital boundaries that protect one’s body, health, sexual orientation, gender identity, private thoughts, and close relationships.Footnote 38 By restricting the freedoms of expression and association of those targeted, IBSA creates a general climate of fear, shame, and censorship, in which victims may feel unable to express themselves or participate in digital spaces without the looming threat of further abuse and exploitation. This chilling effect is particularly troubling, as it hinders engagement in cyberspace, which has often provided essential platforms for women’s rights groups and feminist voices to organize and advocate.Footnote 39
Additionally, it is important to examine “cyberflashing,” which refers to the digital distribution of genital images to another person without their consent.Footnote 40 Though not falling strictly within the definition of IBSA mentioned above, cyberflashing shares many similarities and is increasingly considered an instance of IBSA.Footnote 41 Cyberflashing is a common experience, with women—particularly young women—facing the highest rates of victimization and reporting significant negative impacts on their psychological and social well-being.Footnote 42 Beyond its pervasive harm, cyberflashing is a violation of sexual autonomy, experienced as a sexual violation and intrusion.Footnote 43 Furthermore, it is often socially trivialized and subject to victim-blaming attitudes.Footnote 44
While the literature on national frameworks concerning IBSA remains limited, particularly from a comparative perspective, it is evident that the current landscape is characterized by divergence, fragmentation, and complexity.Footnote 45 Despite these variations, certain common limitations recur across frameworks, particularly their lack of adaptability to future challenges and their failure to reflect the experiences of victims.Footnote 46 One notable area of divergence lies in the various approaches to defining the nature of the images covered, which span sexual, private, and intimate situations,Footnote 47 often failing to capture the intrusive experiences of minority groups that may not align with the “public morals” and “personal experiences” prevalent in Western countries.Footnote 48 Furthermore, the prevalence and accessibility of AI systems capable of altering and manipulating images are rarely addressed.Footnote 49 Similarly, many frameworks fail to criminalize threats of distribution, despite the well-documented harmful and paralyzing effects such conduct can have.Footnote 50 Several national provisions also require proof either of the harm caused to the victim or of the underlying motivation of the offender. These additional requirements place a heavy burden of proof on the victim and increase the risk of victim-blaming attitudes within the courtroom.Footnote 51 Similar disparities and shortcomings are likely to be found in the regulation of cyberflashing. Although the literature is sparse, it appears that while a few Member States have prohibited it, most broaden the scope of other crimes to address it, often with significant limitations in their application.Footnote 52
C. Shaping Policy: The Development of the Directive to Regulate Image-Based Sexual Abuse
Following the European elections in 2019, gender equality emerged as a prominent issue on the political agenda, supported notably by Ursula von der Leyen, the first female President of the Commission, and by the appointment of a dedicated Commissioner for Equality.Footnote 53 Von der Leyen’s inaugural address to the European Parliament emphasized a strong commitment to prioritizing gender equality within her agenda.Footnote 54 Shortly thereafter, the Commission released its “Gender Equality Strategy,” aiming to foster a Union where individuals of all genders and backgrounds have the freedom to pursue their aspirations, with equal opportunities to thrive and to participate actively in shaping European society.Footnote 55
The Gender Equality Strategy regards gender-based violence as a pervasive manifestation of gender inequality, representing one of society’s most pressing challenges. In response, the European Commission pledged to take comprehensive action throughout its mandate to prevent and address gender-based violence, provide support and assistance to victims, and ensure accountability for perpetrators. This commitment included finalizing the EU accession to the Council of Europe Convention on preventing and combating violence against women and domestic violence (the “Istanbul Convention”). Should significant obstacles arise, the Commission also proposed adopting specific EU measures within its competence to achieve the objectives outlined in the Istanbul Convention.Footnote 56
I. The EU Accession to the Istanbul Convention
The EU accession to the Istanbul Convention had long been a matter of debate. The Convention, adopted in 2011, outlines the potential for EU accession in Articles 75 and 76,Footnote 57 a move that the European Parliament consistently supported through numerous resolutions,Footnote 58 and for which the Commission issued a roadmap in 2015.Footnote 59 Following a number of initiatives, the EU signed the Convention in 2017, but there was considerable uncertainty around the legal basis, and the Council was reluctant to proceed with ratification in the absence of a common accord among Member States, blocking the process for several years.Footnote 60
Consequently, in 2019, the European Parliament sought an opinion from the European Court of Justice (“ECJ”) to clarify the appropriate legal basis and, therefore, the scope of EU accession and the ratification procedure.Footnote 61 The ECJ delivered its Opinion 1/19 in October 2021, determining that the appropriate legal basis lies in Articles 78(2), 82(2), 84 and 336 of the Treaty on the Functioning of the European Union (“TFEU”), enabling the Council to conclude the Convention by qualified majority, without having to wait for agreement across all Member States.Footnote 62 Indeed, a number of Member States expressed opposition to ratification, objecting to the inclusion of the term “gender” and reflecting victim-blaming attitudes, gender stereotypes, and resistance to same-sex rights and sexual education in schools.Footnote 63
Accordingly, in 2023 the EU acceded to the Convention in relation to matters falling under its exclusive competence.Footnote 64 This is a significant milestone, not only for its normative and symbolic implications but also because the Istanbul Convention now constitutes an integral part of EU law, serving as a legal source.Footnote 65 While there has been criticism of the limited scope of the EU obligations arising from accession, due to the EU’s limited jurisdiction in criminal law,Footnote 66 this limitation does not extend to several other areas, such as victim assistance and support. Nevertheless, it is worth noting for present purposes that both the non-consensual sharing of images or videos and their non-consensual taking, producing, or procuring are considered to fall under the purview of Article 40 of the Istanbul Convention on sexual harassment, based on the interpretation given by the Group of Experts on Action against Violence against Women and Domestic Violence, commonly known as “GREVIO.”Footnote 67 While this provision does not yet provide a complete solution for addressing all forms of IBSA, it represents a first step upon which the EU and its Member States can build.Footnote 68 Furthermore, given the comprehensive nature of the Istanbul Convention, the fight against IBSA could benefit from its many measures concerning prevention, prosecution, and coordination.
II. The EU Directive on Violence Against Women and Domestic Violence
While negotiations regarding the EU’s accession to the Istanbul Convention were ongoing, in 2022, and in line with its objectives, the Commission published its proposal for the Directive to harmonize action regarding gender-based violence. The proposal aimed to criminalize rape based on lack of consent (as opposed to the requirement of force or threats then in place in several Member States), prohibit female genital mutilation (“FGM”), and address specific forms of cyber violence, including the non-consensual sharing of intimate and manipulated material, cyber-stalking, cyber harassment, and cyber incitement to violence or hatred. Additionally, it sought to combat the under-reporting of violence against women and domestic violence by implementing safer, more gender-sensitive reporting procedures and conducting individual risk assessments for victims. Ultimately, the proposal mandated Member States to offer dedicated services meeting the unique needs of victims of sexual violence and to enhance coordination and cooperation among Member States.Footnote 69
In relation to cyber violence particularly, the Explanatory Memorandum acknowledged the alarming rise of this form of abuse and noted that the Istanbul Convention does not explicitly address it.Footnote 70 As a result, the regulation of such violence is often fragmented or entirely absent in Member States, leaving victims inadequately protected. In this regard, the Commission stressed that cyber violence is as prevalent and significant as physical forms of gender-based violence, often forming part of a continuum of abuse that disproportionately affects women, particularly those engaged in public life.Footnote 71
As regards competence, the proposal relied on judicial cooperation in criminal matters under Article 82(2) TFEU, as well as on sexual exploitation and computer crimes.Footnote 72 On this point, Article 83(1) TFEU provides the European Parliament and the Council with the competence to establish minimum rules and define criminal offenses “in the areas of particularly serious crime with a cross-border dimension resulting from the nature or impact of such offences or from a special need to combat them on a common basis.”Footnote 73 The Article enumerates several crimes that already meet these criteria, including “sexual exploitation” and “computer crimes,”Footnote 74 with IBSA falling within the latter and thus constituting a criminal offense intrinsically linked to information and communication technologies (“ICTs”).
The Commission’s proposal was met with considerable objections from several Member States. Poland objected that the choice of legal basis bypassed the requirement of unanimity. Meanwhile, the Czech Republic, Hungary, and Estonia criticized the interpretation of Article 83 TFEU on computer crimes, arguing that it should cover only offenses committed exclusively through technology, which, in their view, did not align with the cybercrimes outlined in the Directive.Footnote 75 However, the most significant and contentious issue was the proposal’s aim to establish an EU-wide definition of rape.Footnote 76 In contrast, the European Parliament continued to support the provisions on rape and proposed expanding the scope to encompass further offenses such as sexual assault, intersex genital mutilation, forced sterilization, forced marriage, sexual harassment in the world of work, and the unsolicited receipt of sexually explicit material, more commonly known as cyberflashing.Footnote 77
Following several rounds of inter-institutional negotiations between the European Parliament and the Council, a political agreement was reached in February 2024.Footnote 78 In summary, while the EU institutions agreed to remove rape from the list of crimes, they included an obligation for Member States to implement rape prevention measures and to raise awareness of the key role of sexual consent. Other significant proposals were retained, including the criminalization of cyberflashing and an extended list of aggravating circumstances, notably for crimes targeting public representatives, journalists, and human rights defenders. Additionally, the final agreement included the Council’s amendment stipulating that cyber stalking and harassment, along with the non-consensual sharing of intimate images online, should be considered criminal offenses across the EU only when such actions are likely to cause serious psychological harm or instill fear for the victims’ safety. Ultimately, the agreement stressed the need for intersectional support for victims, as advocated by the European Parliament. The final text was agreed and published in the Official Journal of the European Union on May 24, 2024.Footnote 79
Throughout this policymaking and legislative process, the European Economic and Social Committee (“EESC”), as well as numerous CSOs, NGOs, and other stakeholders, voiced their opinions on both the initial proposal and subsequent versions of the Directive. While their interests were diverse—spanning from addressing the unique needs of specific groups of women to advocating for the inclusion of new crimes and the adoption of an intersectional approach—the primary focus was on the harmonization of rape laws.Footnote 80 Nonetheless, attention was also directed towards regulating cyber violence, including IBSA. In particular, the EESC recommended that the absence of consent and public exposure alone should be grounds for categorizing actions as cyber harassment, advocating for the inclusion of cyberflashing.Footnote 81 In contrast, a joint civil society statement opposed this inclusion, citing concerns that it could lead to unjust consequences for women, especially sex workers, who might be wrongly accused of sending unsolicited explicit materials that had in fact been requested. To mitigate this risk, alternative measures were proposed, such as establishing effective reporting mechanisms on online intermediary services and enhancing accountability in responding to user reports.Footnote 82 Similarly, there was a strong emphasis on the imperative for platform accountability to be gender-sensitive and responsive.Footnote 83 Overall, the regulation of IBSA and, more broadly, of online and technology-facilitated violence against women was considered a significant milestone with the potential to bolster women’s fundamental rights and safety in cyberspace. However, it was widely acknowledged that the wording of the Directive remained narrow in scope and did not adequately reflect the experiences of victims.Footnote 84
D. Regulating Image-Based Sexual Abuse in the Directive on Violence Against Women and Domestic Violence
The Directive does not specifically address IBSA as a distinct category of gender-based violence. Rather, it touches upon certain aspects by establishing minimum standards for criminalization and harmonizing measures related to prevention, victim assistance and support, and prosecution. The following subsections will critically analyze this piecemeal approach, examining its potential effectiveness and identifying opportunities for Member States to exceed these minimum standards and strengthen legal protections for all victims of IBSA.
I. The Non-Consensual Distribution of Intimate or Manipulated Material: The Criminalization in Articles 5 and 7(c)
In summary, Article 5 of the Directive criminalizes the intentional distribution of “materials depicting sexually explicit activities” and, to a limited extent, “intimate parts,” where the depicted person does not consent and the material is made public.Footnote 85 This provision is a positive acknowledgment of the prevalence and harms of this form of IBSA which, as noted above, has escalated in recent years. Its inclusion is therefore expected to strengthen legal protections that are often inadequate or non-existent in many Member States. However, despite its significance, this provision has received minimal attention compared to other parts of the Directive, and both the Parliament and the Council largely disregarded calls for improvements to the original text of the article. This oversight risks perpetuating considerable limitations and represents a missed opportunity to advance women’s rights online, as outlined below.
In more detail, Article 5 mandates Member States to criminalize three distinct types of conduct. The first element, Article 5(1)(a), covers the most well-known form of IBSA, namely the non-consensual distribution of intimate images. Nonetheless, this provision is subject to specific limitations: it covers only making such material accessible to the “public” by means of ICTs; the material must depict “sexually explicit activities or the intimate parts of a person”; and the conduct is an offense only where it is “likely to cause serious harm” to the depicted individual.Footnote 86 The second provision, Article 5(1)(b), is a welcome recognition of the exponential growth in the use of AI and other technology to create intimate deepfakes.Footnote 87 This provision covers the production, manipulation or altering of material to make it appear as though a person is “engaged in sexually explicit activities” without that person’s consent and making the material accessible to the “public” by means of ICTs.Footnote 88 The third element, Article 5(1)(c), extends the scope to include threats to distribute the material covered by the first two forms of prohibited conduct where the threat is made to “coerce a person to do, acquiesce to or refrain from a certain act.”Footnote 89
While the inclusion of these forms of IBSA in the Directive is welcome, there are several significant limitations in the scope of the measures. First, the exact nature of the material covered by Article 5(1)(a) is unclear. The provision encompasses images and videos, as well as “similar material.” This is crucial for future-proofing the Directive, allowing it to potentially cover emerging technologies such as holograms or other visual media. For the time being, however, it raises questions about whether it includes non-visual material such as text and audio, as mentioned in Recital 19. Although text and audio clips are often used in various forms of gender-based violence,Footnote 90 they typically fall outside the scope of most laws on IBSA. Thus, it seems likely that this measure is limited to material similar to imagery in type rather than in use.Footnote 91 Additionally, while the term “similar material” might appear broad, the scope is actually limited. The provision no longer covers “intimate” images but only “sexually explicit activities or the intimate parts of a person.”Footnote 92 This could exclude images of intimate behaviors such as changing clothes or using the toilet, which do not necessarily display the genitals or intimate parts of an individual.Footnote 93 Furthermore, the reference to “intimate parts” is open to various interpretations, raising specific questions about its applicability to imagery considered intimate and/or sexual in some minority religious and ethnic communities.Footnote 94
Under letter (b), the scope of Article 5(1) relating to altered and deepfake imagery is even more limited. It includes only imagery making it appear as though a person is “engaged in sexually explicit activities.” This suggests participation and active engagement in sexual activities, such as in conventional pornographic videos, but excludes images of mere nudity. This definition therefore excludes material produced through “nudification” apps and subsequently distributed without consent, as it may not depict someone actively engaging in sexual activities, even if the nude image itself is considered sexual.Footnote 95 This leaves out a considerable range of non-consensually produced material that has been at the center of many cases of abuse.Footnote 96 These gaps in the original draft were identified, but no action was taken to amend the Directive to ensure that it covers the wide range of ways in which this abuse is perpetrated.Footnote 97
Regarding the provision on threats under letter (c), Article 5(1) is not comprehensive, as it covers only instances where threats are made “to coerce another person to do, acquiesce, or refrain from a certain act.” This includes coercive situations such as “sextortion,” where a victim is threatened with the distribution of intimate images unless further material is shared.Footnote 98 It also covers blackmail scenarios where money is demanded to prevent distribution, a common form of extortion involving adult victims.Footnote 99 Additionally, it may apply to domestic abuse cases where a perpetrator threatens to distribute material as part of a broader pattern of control and abuse, potentially focusing on a “certain act” as required by the provision.Footnote 100 However, the prosecutorial challenge lies in identifying and proving the specific “certain act” connected to the threats. At the same time, Article 5(1) does not cover threats intended solely to cause distress to the victim. For instance, an ex-partner might threaten to distribute intimate images deliberately to cause distress, without coercing a particular act. Similarly, other perpetrators might make threats to exert power and control over the victim without reference to a “certain act.” While including threats in the provision is a positive step, limiting it to coercive threats falls short of the Directive’s ambition and leaves significant gaps in protection, especially when coupled with additional evidentiary thresholds.
Another significant limitation is that the provisions in Article 5(1) are restricted to the distribution of materials or threats to distribute them. This means that the non-consensual creation or taking of intimate imagery is not included. This omission disregards the experiences of many victims, who often have material created or taken without their consent, in addition to it being distributed.Footnote 101 This narrow focus on the act of sharing jeopardizes victims’ access to legal redress and impinges on women’s sexual autonomy, preventing them from safeguarding their personal boundaries and controlling the dissemination of their intimate depictions. In practice, this limitation stems from the legal basis of the Directive, namely the harmonization of cross-border computer crime under Article 83 TFEU. Nonetheless, when implementing the Directive, it would be a significant positive step for Member States to ensure a comprehensive legal framework that includes the non-consensual creation, taking, and sharing of intimate material. Such an approach would offer more robust protection for victims and better uphold their fundamental rights and sexual autonomy.
The legal protection provided by Article 5 is further restricted by its application only to material made accessible to “the public.” Originally, the provision referred to “a multitude of end-users,” wording that drew criticism from various CSOs and NGOs for unnecessarily limiting the scope of the measure and failing to understand the impact on victims of non-consensual distribution to small numbers of people.Footnote 102 The amendment to “the public” is preferable, as it is more open to including the range of ways the abuse can be perpetrated. For example, the “public” could include the victim’s employer and colleagues: although these may not be many individuals, distribution to them is still “public.” On the other hand, distribution to the victim’s family, which may have catastrophic and even life-threatening effects, may not be considered a “public” distribution. Recital 18 attempts to justify this limitation by explaining that ICTs amplify harm to the victim, and it clarifies that the criterion of making material accessible to the public should be understood as potentially reaching a significant number of individuals. However, this explanation fails to clearly define the boundaries of distribution, leaving the law ambiguous for victims, criminal justice personnel, and the public. This ambiguity is likely to result in considerable variation across Member States.
Article 5 continues to rely on the lack of consent of the victim, a principle reiterated in Recital 19, which specifies that whether the victim consented to the creation of the material or shared it with a specific individual is irrelevant. This serves to protect victims from secondary distribution and from slut-shaming attitudes that may arise from their initial participation in sexual conduct. Although Recitals are not binding, it is hoped that Member States will incorporate this consideration into their legal frameworks. However, Article 5 falls short of integrating calls for the specification of affirmative consent.Footnote 103
Furthermore, due to a Council amendment, Article 5 now restricts its scope to instances where the conduct in question is “likely to cause serious harm” to the victim.Footnote 104 While Recital 18 clarifies that the specific circumstances of each case should be considered, and that the likelihood of causing serious harm can be inferred from objective factual circumstances, this clause risks requiring victims to give evidence regarding the effects of the abuse. This is a breach of a victim’s sexual autonomy and fails to recognize that this abuse is wrong per se, and not only due to its potential adverse consequences.Footnote 105 Simultaneously, there is a real risk that the harms of this abuse are minimized, given the difficulty of proving “serious harm.”Footnote 106 This elevated threshold could also discourage law enforcement agents from pursuing prosecutions where images were initially shared consensually, particularly in the context of sex work.Footnote 107
The Council added a final paragraph to Article 5, stating that its mandate for criminalization does not infringe upon the obligation to uphold the rights, freedoms, and principles outlined in Article 6 TEU. It also underscores that this criminalization is implemented without prejudice to fundamental principles related to freedom of expression and information, as well as the freedom of the arts and sciences, as delineated in Union or national law. This followed the objections of many Member States and of some civil society organizations to extending the law to cover forms of IBSA.Footnote 108 However, such a clarification appears only in Article 5, raising questions about its necessity and the message it conveys. All EU legislation must conform to fundamental rights, making this specific provision redundant. Moreover, this emphasis risks overshadowing the fact that IBSA itself infringes on fundamental rights, including the freedom of expression of women and girls,Footnote 109 which the Directive aims to protect. While balancing fundamental rights such as freedom of expression is essential, it must be emphasized that the Directive targets non-consensual conduct of a sexual nature. As long as the lack of consent is proven, there is no need to dilute its focus with unnecessary reassurances.
Finally, it is important to note that Article 7(c) criminalizes cyberflashing, defined as “the unsolicited sending, via ICT, of an image, video, or similar material depicting genitals to a person, where such conduct is likely to cause serious psychological harm to that person,” and that Recital 24 recognizes it as another form of intimidating and silencing women.Footnote 110 While this inclusion aligns with a Parliament amendment and addresses a common request among stakeholders,Footnote 111 its reliance on harm causation may still present some of the aforementioned challenges by setting a high evidentiary threshold for prosecution. At the same time, it fails to recognize the violation of sexual autonomy and the experience of sexual intrusion that cyberflashing generally involves.Footnote 112
II. The Non-Consensual Distribution of Intimate or Manipulated Material: A Holistic Approach to Its Response
Overall, the Directive presents an ambitious strategy to address violence against women and domestic violence comprehensively. In its chapters dedicated to victim protection and access to justice (Chapter 3), victim support (Chapter 4), prevention and early intervention (Chapter 5), and coordination and cooperation (Chapter 6), there is a clear attempt to prioritize the experiences and needs of victims, challenging gender stereotypes and victim-blaming attitudes both in and out of the courtroom. This is evident in Article 20, which safeguards the victim’s private life by allowing evidence of past sexual conduct or other intimate matters only when relevant and necessary. Similarly, Article 33 mandates specific support for victims facing intersectional discrimination, recognizing their heightened vulnerability to violence against women or domestic violence. Furthermore, the Directive acknowledges and addresses specific aspects of violence against women and domestic violence, including IBSA, as outlined below.
1. Strengthening Investigative and Prosecutorial Responses
The lack of knowledge and training among law enforcement agents and legal professionals presents one of the main obstacles to an adequate response to IBSA,Footnote 113 a response often characterized by social trivialization and victim-blaming attitudes. Accordingly, the Directive endeavors to provide solutions. Article 15 mandates Member States to ensure that those investigating and prosecuting acts of violence against women or domestic violence possess adequate expertise in these matters and have access to effective investigative tools. This includes the ability to gather, analyze, and secure electronic evidence, particularly in cases of IBSA pursuant to Articles 5 and 7(c). Additionally, Article 36.10 calls for the implementation of training activities tailored to cybercrimes, including IBSA, emphasizing the unique aspects of violence against women and domestic violence. In implementing these measures, transparency, educational review, and accountability are paramount to ensuring the effectiveness of consolidated expertise and training. Currently, there is limited scientific research on how police, legal, and judicial training influences behavior and enhances responses to violence against women. Consequently, it is essential to bridge this gap by integrating scholarly knowledge and adult teaching skills to both observe and shape knowledge and training practices.
2. Enhancing Education and Social Awareness
The Directive acknowledges that there is a lack of social awareness about violence against women, including IBSA, which hinders victims from naming their harmful experiences and from accessing adequate assistance and support.Footnote 114 Consequently, Article 25.1(d) mandates Member States to provide specialist support to victims of IBSA, including guidance on how to document the harm and information on judicial remedies and the means to remove online content related to the crime. More broadly, among the preventive measures included in Article 34, paragraph 8 covers the development of digital literacy skills, including critical engagement with the digital world and critical thinking, to enable users to identify and address cases of cyber violence, to seek support, and to prevent its perpetration. This critical dimension is extremely relevant, as research indicates that social media activism has great potential in raising awareness and empowering women, particularly in addressing feelings of isolation and facilitating help-seeking behaviors.Footnote 115 However, the development of digital literacy skills should be coupled with initiatives aimed at fostering community awareness and advocating for sexual consent in cyberspace. This approach can enhance personal autonomy while mitigating the risk of moral policing. The emphasis should not be on criminalizing the exploration and expression of one’s sexuality online but on encouraging the ethical use and consumption of technologies to facilitate and shape intimacy.Footnote 116
3. Strengthening Platform Cooperation and Regulation
Importantly, the Directive underscores the need for collaboration with social media platforms and search engines in combating IBSA, aligning with international consensus,Footnote 117 and establishing an explicit connection between the Directive and the Digital Services Act. While Article 34(8) advocates for multidisciplinary cooperation among relevant intermediary service providers and competent authorities to tackle IBSA, Article 42 promotes self-regulatory cooperation among these entities, which may involve the development of codes of conduct. Member States are likewise encouraged to raise awareness of such self-regulatory measures adopted by intermediary service providers, emphasizing their efforts to remove non-consensual material and to enhance employee training to prevent IBSA and support its victims. However, there is concern regarding the use of the term “encourage” in Article 42. Without binding obligations, intermediary service providers may not prioritize or fully implement the recommended measures, and their self-regulation is generally considered insufficient to provide genuine oversight, response, and accountability.Footnote 118
4. Removing Non-Consensual Imagery
Of particular significance is Article 23, which provides for the removal of specific online content and intersects directly with the Digital Services Act.Footnote 119 Specifically, Member States must establish measures for the prompt removal of, or restriction of access to, IBSA material, authorizing competent authorities to issue legally binding orders compelling hosting service providers to remove the content or block access to it. Compliance with these measures must adhere to the conditions outlined in Article 9(2) of Regulation (EU) 2022/2065, thereby ensuring their legality and efficacy by specifying aspects such as legal basis, territorial scope, language, and redress mechanisms. If the removal of content proves impracticable, authorities may turn to other intermediary service providers possessing the technical capability to restrict access. Furthermore, Article 23 underscores the importance of transparency and due process in the execution of these measures. They must be transparently executed, ensuring that they are proportionate and necessary while safeguarding the rights and interests of all parties involved. Simultaneously, hosting service providers affected by these measures have the right to pursue judicial remedies, while content providers must be informed of the reasons for content removal and of their right to seek judicial redress.
However, a major limitation of these requirements regarding the removal of non-consensual intimate imagery is that criminal prosecution and conviction appear to be a prerequisite. Article 23(3) states that if criminal charges are terminated without a conviction, the orders referred to are discharged. This will seriously limit the redress available to victims, as so few prosecutions result in convictions, often due to the offense thresholds identified above. What is not clear is whether the orders can be applied where there is no criminal report or prosecution. This seems unlikely in view of the overall approach of these provisions. Accordingly, the support for victims is severely constrained and lags behind international best practice. Many jurisdictions now provide for civil orders to remove non-consensual material, including orders against perpetrators and platforms, as well as regulatory bodies that take down material on behalf of victims.Footnote 120
E. Regulating Image-Based Sexual Abuse in the Digital Services Act and the AI Act
For a comprehensive analysis of the regulatory approach to IBSA at the EU level, it is necessary to examine the Directive within the broader framework of the DSA and the AI Act, which encompass provisions relevant to tackling IBSA.
I. The Digital Services Act
In 2022, the EU introduced the Digital Services Act as a key component of its digital strategy, seeking a balance between fundamental rights protection and market growth.Footnote 121 The legislative rationale behind the DSA stems from the rapid transformation of the digital services landscape, where a few companies have emerged as dominant players, leveraging network effects and business models centered on the continuous processing of user data. Social media platforms and search engines have transcended their traditional roles, evolving into influential public forums.Footnote 122 They serve as channels for information sharing, avenues for businesses to engage with customers, and platforms for political discourse. However, the intricate nature of these digital environments has also facilitated the proliferation of illegal content and goods, as well as online disinformation and manipulation, with significant political and societal implications,Footnote 123 including gender-based violence in cyberspace.Footnote 124 Consequently, the DSA targets online platforms and search engines, imposing various obligations based on their size, which allows supervising authorities to address their business models directly. The immediate focus here, however, lies on the provisions relevant to IBSA and its response at the EU level.
The DSA acknowledges and conceptualizes IBSA in three respects. First, as previously mentioned, Article 9 regulates judicial and administrative orders to act against illegal content, contributing to the prosecution of IBSA as defined in Articles 5 and 7(c) of the Directive. Second, while Article 2(h) defines “illegal content” as “any information that … is not in compliance with Union law or the law of any Member State,” Recital 12 explicitly addresses “the unlawful non-consensual sharing of private images,” establishing a clear link with existing national legislation concerning some forms of IBSA.Footnote 125 This connection will be further strengthened by Article 5 of the Directive. Third, Recital 87 refers to “illegal pornographic content.”Footnote 126 While the definition of illegal pornographic content remains ambiguous due to variations in pornography laws across Member States and the lack of EU competence in this area, the recital provides an explicit example: “[C]ontent representing non-consensual sharing of intimate or manipulated material.”Footnote 127 Notably, this reference appears to extend beyond the scope of Recital 12, encompassing sexually explicit deepfakes. However, Recital 87 juxtaposes the non-consensual sharing of intimate materials with pornography rather than treating it as a form of sexualized and gendered harm. This parallels the aforementioned inadequacy of the terms “revenge pornography” and “deepfake porn” to describe IBSA, given their victim-blaming and slut-shaming connotations.
In Chapter III, the DSA sets out due diligence obligations to ensure a transparent and safe online environment. For this purpose, Article 12 requires online platforms and search engines to designate a single point of contact enabling users to communicate directly and rapidly, by electronic means and in a user-friendly manner. This provision is crucial, considering that victims of IBSA often encounter difficulties in contacting these actors to have their images removed. Moreover, Article 14(1) mandates that terms and conditions include information on the policies, procedures, measures, and tools used for content moderation, including algorithmic decision-making and human review, as well as the rules of procedure of the internal complaint-handling system. For victims of IBSA, this transparency is crucial, as it allows them to understand how online platforms and search engines are likely to handle their reports and what steps are taken to remove non-consensual intimate content. According to paragraph 4, content moderation should be carried out in a diligent, objective, and proportionate manner, with due regard to the rights and legitimate interests of all parties involved. It is hoped that this will allow a clear line to be drawn between IBSA and the consensual dissemination of intimate material, whether for personal or professional use. Simultaneously, Article 16 introduces general rules on mechanisms allowing anyone on the internet to signal potentially illegal content, known as “notice-and-action mechanisms.” Although this provision does not differentiate procedures based on content type, which may lead to disproportionate actions and confusion, it provides a means for victims and organizations representing their interests to signal non-consensual materials. Such mechanisms are vital for victims because once material is uploaded and distributed without consent, the harm is done, and the material is likely to spread widely across the internet, making its removal extremely challenging and amplifying victimization.Footnote 128 Although this approach has faced criticism for creating a system of privatized content control over sexual conduct in cyberspace beyond judicial and democratic scrutiny,Footnote 129 one should emphasize that protecting the fundamental rights of privacy and freedom of expression for women and girls, as well as others predominantly affected by IBSA, requires proactive and swift responses from online platforms and search engines to remove potentially harmful material.
Finally, Article 22 establishes the role of trusted flaggers, whose notices must be prioritized by online platforms. Overall, criticism abounds regarding the effectiveness of trusted flaggers in realizing their intended goal of fostering decentralized, legitimate, and inclusive content moderation. Instead, their influence appears predominantly to bolster existing power dynamics or fortify the platforms themselves, reinforcing the perceived legitimacy of their content moderation methods.Footnote 130 Furthermore, an examination of the European Commission’s compilation of trusted flaggers under the DSA reveals a significant absence of designated organizations dedicated to supporting victims online, raising concerns about inclusivity and support for those affected.Footnote 131
In Section 5, additional obligations are imposed on very large online platforms (“VLOPs”) and very large online search engines (“VLOSEs”), defined as those serving an average monthly active user base in the Union equal to or exceeding 45 million, and designated as such by the European Commission under Article 33. At the time of drafting, numerous online platforms known for hosting instances of IBSA fall within this category, including Facebook, Instagram, TikTok, Pornhub, Stripchat, and XVideos. However, a significant deficiency in this designation, particularly concerning the fight against IBSA, is the current exclusion of messaging apps such as WhatsApp and Telegram from the scope of the DSA’s platform component, despite calls from various stakeholders for their inclusion.Footnote 132 Nonetheless, according to Article 34, VLOPs and VLOSEs are mandated to conduct regular risk assessments, including evaluating the dissemination of illegal content through their services and assessing any actual or foreseeable negative impacts related to gender-based violence. Subsequently, Article 35 outlines specific mitigation measures, including the prompt removal of non-consensual material, distinguishing between deepfakes and authentic images, and fostering collaboration with other online providers. The latter measure holds particular significance, as non-consensual material often proliferates across multiple platforms, and victims may not be aware of its presence or possess the means to trace it independently.
The DSA sets out a series of further due diligence obligations that, while not directly addressing IBSA, bear relevance to its mitigation. However, it is crucial to pause and examine the accountability framework established by the DSA, particularly in light of online platforms’ exploitation of IBSA for profit.Footnote 133 In essence, the DSA establishes a liability exemption regime, stipulating that providers can be held liable only for content of which they have knowledge. This framework encourages self-restraint among providers, fostering an environment conducive to mutual benefit while upholding freedom of expression.Footnote 134 Notably, the DSA introduces novel regulatory expectations termed “due diligence obligations,” outlined in Chapter III.Footnote 135 These obligations are distinct from legal immunities pertaining to third-party content. As emphasized in Recital 41 of the DSA, “[t]he due diligence obligations are independent from the question of liability of providers of intermediary services which need therefore to be assessed separately.”Footnote 136 Violations of these due diligence obligations trigger a separate enforcement mechanism outlined by the DSA, rather than subjecting providers to a deluge of individual claims. The primary aim of these obligations is to enhance the efficacy of the systems and procedures used by online platforms for content moderation and overall risk management.Footnote 137 This means that the efficacy of this regime will depend on the proactive engagement of regulators and their willingness to challenge platforms.
On a final note, online platforms and search engines must comply with the DSA regardless of their place of establishment, whether within or outside the EU, provided they offer intermediary services to users situated within the EU, pursuant to Article 2. At first sight, this provision may seem promising for effectively addressing IBSA, potentially fostering a harmonized legal framework. However, under Article 56, the responsibility for enforcing the DSA rests primarily with Member States rather than with the European Commission, save for the supervision of VLOPs and VLOSEs. This decentralized approach raises concerns regarding potential inconsistencies and fragmentation at the national level.Footnote 138 With regard to IBSA, such discrepancies are likely to intensify: although the Directive harmonizes its criminal definition, Member States retain considerable discretion in the national transposition process.
II. The AI Act
In 2021, the European Commission published its proposal for the AI Act. Three years later, the European Parliament and the Council approved the final text, although it will take several more years before this legislation becomes fully applicable at the national level.Footnote 139 According to Article 1, the AI Act aims “to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter of Fundamental Rights … and to support innovation.”Footnote 140 By doing so, the EU seeks to enhance its economic competitiveness and secure a stronger global position in AI.Footnote 141 It aims to distinguish itself from the commercially driven US AI policy and the government-dominated Chinese strategy,Footnote 142 while reasserting the “Brussels effect” through its market size, regulatory capacity, strict standards, stable consumer base, and cost efficiencies to establish global regulatory norms.Footnote 143 Consequently, the AI Act adopts a risk-based approach, categorizing AI systems into four levels according to their design and associated risks, each with corresponding accountability obligations.
To some degree, the AI Act recognizes the potential risks AI systems pose concerning gender, noting in the Preamble the risk of fundamental rights violations and diversity bias that high-risk AI systems might cause. In response, Recital 165 highlights the importance of gender balance in development teams. According to Recital 27, gender equality is part of the “diversity, non-discrimination, and fairness” ethical principle, which, although not binding, is recommended as a guiding principle in AI development. However, despite a growing body of literature highlighting how AI systems can be used to perpetrate gender-based violence,Footnote 144 the AI Act does not explicitly address this issue. Instead, Recital 136 generally recognizes the risks posed by the dissemination of artificially generated or manipulated content, with a strong emphasis on protecting democratic processes, civic discourse, and electoral integrity. Yet, as previously mentioned, AI systems are also increasingly used to alter or manipulate intimate images and videos without the consent of the person depicted.
In recognizing that certain AI systems intended to generate or manipulate content may pose specific risks of impersonation or deception, Chapter IV subjects their use to specific transparency obligations. In particular, Article 50.2 provides that natural persons should be notified when an AI system has generated or manipulated image, audio, or video content that appreciably resembles existing persons and would falsely appear to a person to be authentic. Such content must also be clearly and distinguishably disclosed as artificially created or manipulated, by labeling the AI output accordingly and disclosing its artificial origin at the latest at the time of first exposure. This measure aligns with the current trend of developing by-design solutions to help individuals maintain control over their own images.Footnote 145 Under Article 50.4, the only exception applies to deepfakes that form part of evidently creative, satirical, artistic, or fictional works, where the transparency obligations are limited in the name of freedom of expression. However, this emphasis on labeling AI-altered and deepfake material does little to reduce the harm caused by sexually explicit deepfakes. That harm is felt even where viewers know the material has been altered, because the violation of autonomy and sexual integrity has already taken place. For victims of sexually explicit deepfakes, therefore, labeling such content as altered or deepfaked is neither a solution nor a means of harm reduction.
On a final note, Article 2 gives the AI Act a broad scope, covering entities and activities related to AI systems within the EU as well as those affecting the Union from third countries. Specifically, it applies to any provider placing AI systems on the EU market, deployers of AI systems within the Union, and importers, distributors, and product manufacturers placing AI systems on the market or using them in conjunction with their products under their own name or trademark. Furthermore, authorized representatives of providers not established in the Union, as well as affected persons located within the Union, are included in its provisions. Potentially, this broad scope enhances accountability, particularly given the increasing availability of generative AI systems, such as nudification apps, on most app stores.Footnote 146 For enforcement, the Commission is expected to adopt implementing acts on the application of the provisions related to the labeling and detection of artificially generated or manipulated content, as well as to facilitate the adoption of codes of conduct at the Union level, pursuant to Chapter X. Additionally, Chapter VII establishes several boards and panels to contribute to the effective enforcement of the AI Act. However, as with the DSA, the primary responsibility lies with Member States, which are expected to establish effective, proportionate, and dissuasive penalties.
F. Conclusions
Image-based sexual abuse encompasses the non-consensual taking, creating, and disseminating of intimate materials, along with threats to distribute them. As a sexualized and gendered harm, IBSA is increasingly common and should be a central topic in today’s political discourse, especially in light of the new gender and technology strategies of the next European Commission. So far, the EU has demonstrated some commitment to addressing gender-based violence, including its online and technology-facilitated dimensions, specifically IBSA. In this regard, the EU has issued a Directive explicitly addressing the non-consensual sharing of intimate or manipulated material and cyberflashing. The Digital Services Act and the AI Act can also contribute to this policy response by imposing specific obligations on online platforms, search engines, and AI developers.
Accordingly, this Article has provided a comprehensive analysis of the new Directive and has discussed key ancillary provisions of the Digital Services Act and the AI Act. While welcoming the focus on online and technology-facilitated violence against women, we have identified several limitations of the current Directive that are likely to inhibit its effectiveness in challenging IBSA. Among others, these include the narrow scope of criminalized images and conduct, and the increased burden of proof placed on victims. Additionally, the Directive does not clearly address how to balance criminal responses to IBSA with freedom of expression, despite the fact that such conduct primarily restricts the freedom of expression of women and girls. Positively, the Directive adopts a holistic approach, including measures to support victims and to improve education and training for the public and criminal justice personnel. While the Digital Services Act and the AI Act impose several obligations on online platforms, search engines, and AI developers, entities that often channel and profit from online and technology-facilitated violence against women, their current formulations fail to reflect a comprehensive understanding of IBSA and the experiences of its victims. For meaningful impact, both the Commission and Member States need to proactively adapt these frameworks, taking a holistic approach that fully addresses the multifaceted nature of such sexualized and gendered harm.
Acknowledgements
The authors declare none.
Competing interests
The authors declare none.
Funding statement
No specific funding has been declared for this article.
Authorship Note
The authors formally agreed on the sequence of authorship and accept responsibility for the content of the publication, acknowledging that (1) Carlotta Rigotti designed the research and drafted the article in its entirety, (2) Clare McGlynn contributed to the critical analysis and reviewed the draft, and (3) Franziska Benning reviewed the draft, providing practice-inspired insights.