Bruno de Oliveira Magalhães

The increasing integration of digital technologies into our daily lives, while bringing numerous benefits, has also opened up new forms of violence, particularly against women. The intersection between technology and gender-based violence requires legal frameworks to adapt in order to offer adequate protection. In this context, the new Brazilian law (Law No. 15.123/2025) represents a significant step by specifically addressing psychological violence committed using artificial intelligence. Understanding the details and implications of this legislation is crucial for both potential victims and society as a whole.

What Does the Law Establish?

The central provision of Law No. 15.123/2025 increases the penalty for the crime of psychological violence against women when it is carried out with the use of artificial intelligence (AI) or any other technology that alters the image or voice of the victim. This provision reflects the legislator's recognition of the distinct harm and potential scale of abuse facilitated by these technologies. The law focuses on the means of perpetrating psychological violence, indicating an understanding that technology can amplify the impact of this type of abuse: while traditional psychological violence may be limited by physical presence or direct communication, AI enables the widespread dissemination of harmful content and persistent harassment, requiring a more robust legal response.

Regarding the penalty, the law stipulates that the prison sentence, whose base range is six months to two years plus a fine, is increased by half, yielding a range of nine months to three years. This substantial increase signals the seriousness with which the legislator views this form of technologically enabled violence. The heavier penalty aims to serve as a more effective deterrent and to reflect the amplified harm caused by the use of AI in these crimes: the prospect of a longer prison sentence may lead individuals to reconsider using AI to commit psychological violence, and the severity of the punishment seeks to match the potential for widespread and lasting psychological harm.
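As a simple illustration of the sentencing arithmetic described above (an informal aid, not a legal computation; the function name `aggravated_range` is purely illustrative), increasing the base range by half works out as follows:

```python
# Illustrative sketch of the aggravated sentencing range under
# Law No. 15.123/2025: the base range for psychological violence
# (6 months to 2 years of imprisonment) is increased by half.

def aggravated_range(base_min_months: int, base_max_months: int,
                     factor: float = 0.5) -> tuple[int, int]:
    """Return the (min, max) sentence range in months after the increase."""
    return (int(base_min_months * (1 + factor)),
            int(base_max_months * (1 + factor)))

low, high = aggravated_range(6, 24)           # base: 6 months to 2 years
print(f"{low} months to {high // 12} years")  # 9 months to 3 years
```

This matches the figures cited in the law's summary: 6 months becomes 9 months, and 24 months (2 years) becomes 36 months (3 years).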

The scope of the law is noteworthy as it applies not only to artificial intelligence but also to "any other technology that has the capability to alter the image or voice of the victim." This broad scope ensures that the legislation remains relevant even with technological evolution beyond the current capabilities of AI. By not limiting to the specific AI technology but rather the harmful result of altering image or voice, the law becomes more resilient to future innovations that may be used for similar purposes.

The law is contextualized within the existing crime of psychological violence, specifically referring to emotional harm that undermines and disrupts full development or aims to degrade or control the actions, behaviors, beliefs, and decisions of the woman. Psychological violence can occur through threats, coercion, humiliation, manipulation, isolation, blackmail, ridicule, limiting the right to come and go, or any other means that harm the woman's psychological health and self-determination. The new law is based on pre-existing legal definitions of psychological violence, adapting them to the digital age by recognizing how technology can be an instrument for perpetrating these already known forms of abuse. The law does not create a new crime but elevates the punishment for an existing one when a specific technological element (AI or similar) is involved, leveraging the established legal understanding of psychological violence.

The use of deepfakes, which are AI-generated fake videos or images involving real women, is explicitly cited as one of the most current forms of this type of violence. These productions often spread false pornographic content and are used as a form of threat, coercion, humiliation, and blackmail. The explicit mention of deepfakes underscores the immediate trigger and main concern that drove this legislative change. High-profile cases and the growing sophistication of deepfakes likely played a significant role in the creation of this law.

Law No. 15.123, of 2025, results from PL 370/2024, authored by Deputy Jandira Feghali (PCdoB-RJ), and was reported in the Senate by Senator Daniella Ribeiro (PP-PB). The matter was approved during Women's Month, in March 2025, underscoring its importance for the cause of women's rights. The legislative process and the timing of the law's approval indicate a political will to address this specific form of violence against women; the bipartisan support and approval during Women's Month suggest a broad consensus on the need for this legislation. President Lula, during the sanctioning ceremony, emphasized the psychological dimension of violence against women.

Section | Provision | Impact
Increased Penalty | Raises the penalty for psychological violence against women committed with AI or similar technologies. | Acts as a stronger deterrent and reflects the expanded harm.
Penalty Increase | Increases the prison sentence by half (from 6 months-2 years to 9 months-3 years). | Provides more significant punishment for offenders.
Law's Scope | Applies to AI and any technology that alters image or voice. | Ensures the law remains relevant as technology advances.
Focus on Psychological Violence | Specifically targets emotional harm, degradation, or control facilitated by technology. | Builds on existing legal definitions of psychological violence in the digital context.
Mention of Deepfakes | Explicitly cites deepfakes as a primary example of the targeted violence. | Highlights the immediate concern and context for the law's creation.

The Dark Side of AI: Violence Against Women in the Digital Age

To fully grasp the scope of the new law, it is essential to understand what Artificial Intelligence (AI) is and how it can be maliciously used in the context of violence against women. In simple terms, AI involves machines that learn and solve problems by mimicking aspects of human intelligence: roughly, giving a machine a "brain" capable of learning from data such as images, sounds, texts, and numbers. AI is already present in many areas of daily life, from virtual assistants to social media algorithms. Demystifying AI matters because many people do not fully understand what it is or how it works, which makes it harder to grasp the specific threats it poses when misused.

AI can be used for various forms of violence against women. A particularly concerning example is deepfakes: AI can create realistic fake videos or images of individuals, often superimposing their faces on bodies involved in sexual acts. Such content is frequently used to humiliate, blackmail, or threaten women, causing significant psychological distress and reputational damage. Deepfakes represent a particularly insidious form of violence because they can erode trust in visual evidence and cause profound emotional harm. Their realism makes them especially damaging: victims may face both public ridicule and disbelief when they deny the content, and the non-consensual nature of these productions violates their privacy and dignity.

Another form of abuse facilitated by AI is automated cyberstalking and invasive monitoring. AI can be used to automate the collection of personal data from social media and other online sources for stalking purposes, and it can mimic social media interactions to gather information about the victim or spread misinformation about her. Stalkerware, a type of software that can track location, record audio, and access messages, can be enhanced by AI for more sophisticated and harder-to-detect surveillance. AI thus makes cyberstalking more efficient, persistent, and difficult to detect, heightening the victim's fear and anxiety: whereas traditional cyberstalking may require manual monitoring, AI can automate the process, allowing perpetrators to track multiple victims or collect large amounts of information with little effort.

In addition to deepfakes and cyberstalking, AI can be used for other forms of manipulation and psychological abuse. For example, AI could be used to craft highly convincing, personalized phishing attempts or scams that target women for emotional manipulation or financial gain, and AI-powered chatbots could be used for automated harassment or to spread hateful messages. Although such tools are not inherently gender-specific, the law's context makes their use against women the central concern. There is also the possibility of AI being used to generate fake online profiles to deceive and manipulate women in online relationships. AI enables more sophisticated and personalized forms of psychological manipulation, making it harder for victims to recognize and defend against the abuse: generic harassment or scams are often easier to identify, whereas AI can tailor messages and interactions to exploit individual vulnerabilities, making the abuse more effective.

Legal and Technical Challenges: How to Punish AI Crimes?

Punishing crimes involving the use of artificial intelligence presents significant legal and technical challenges. One of the main obstacles lies in the difficulty of identifying and tracking the perpetrators of these crimes. Assigning the creation and dissemination of AI-generated content to a specific individual is complex, especially when anonymization technologies are used. Furthermore, the transnational nature of the internet makes it difficult to pursue perpetrators located in other jurisdictions. The anonymity and borderless nature of the internet, combined with the technical sophistication of AI, pose considerable obstacles for law enforcement authorities. Unlike traditional crimes, where physical presence or direct communication can provide leads, AI-related crimes can be committed remotely and leave fewer easily identifiable digital traces.

The collection and admission of digital evidence in online violence cases involving AI also present complexities. The fragile and easily alterable nature of digital evidence makes it susceptible to deterioration or manipulation. Specific technical expertise is required to properly collect, preserve, and analyze digital evidence related to AI-generated content. Validating the authenticity of digital evidence, such as deepfakes, in court is also challenging. Traditional methods of evidence collection and authentication may not be sufficient for AI-related crimes, requiring new approaches and specialized knowledge. Simple screenshots or recordings may not be enough to prove the origin or manipulation involved in AI-generated content. Forensic analysis by experts is often necessary.
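As one concrete illustration of the evidence-preservation problem described above, a common first step in digital forensics is to record a cryptographic hash of a captured file, so that any later alteration becomes detectable. The sketch below is a minimal illustration of that idea, not a complete forensic or chain-of-custody procedure; the function name `evidence_record` is hypothetical:

```python
# Minimal sketch of one evidence-preservation step: fixing a file's
# SHA-256 digest and a UTC capture timestamp at collection time.
# If the file is later modified, its digest will no longer match.
import hashlib
from datetime import datetime, timezone

def evidence_record(path: str) -> dict:
    """Compute a SHA-256 digest of a file plus a UTC capture timestamp."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "file": path,
        "sha256": sha256.hexdigest(),
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
    }
```

In practice, a digest like this is only one link in a chain of custody: courts typically also require documentation of who collected the material, when, and how, and expert analysis may still be needed to establish whether content such as a deepfake was synthetically generated.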

Technical expertise for investigation and forensics is also crucial. Law enforcement authorities and legal professionals often lack sufficient technical knowledge of AI and cybercrime, so it is essential to invest in specialized training and resources to investigate and prosecute these cases effectively. Closing this expertise gap within the legal system is fundamental to the effective enforcement of the new law: without proper training, investigators and prosecutors may struggle to understand the technical aspects of AI-related crimes, leading to unsuccessful investigations and prosecutions.

A Global Overview: Laws and Initiatives Against Online Gender Violence

The new Brazilian law does not arise in isolation on the global stage. Other laws and initiatives, both in Brazil and other countries, seek to combat gender-based violence facilitated by technology. In Brazil, the Maria da Penha Law (Law No. 11.340/2006) focuses on domestic and family violence against women. The new law complements the Maria da Penha Law by specifically addressing technologically facilitated psychological violence, which can occur inside or outside domestic relationships. Law No. 14.811/2024, which criminalizes cyberbullying and toughens penalties for some online crimes against minors, also demonstrates the growing legislative attention to online harm. The new law is part of a broader trend in Brazil to recognize and criminalize various forms of online violence, particularly those affecting vulnerable groups. The existence of laws against domestic violence and cyberbullying provides a foundation on which this new legislation can build, addressing a specific gap related to AI.

In other countries, similar efforts are noted. In the United States, the "Take It Down Act" aims to criminalize non-consensual deepfake pornography and requires platforms to promptly remove such content. The law has bipartisan support but also faces concerns about freedom of expression. The "NO FAKES Act" is a bipartisan bill that seeks to establish federal intellectual property rights over individuals' voice and image to combat unauthorized AI-generated deepfakes. Many U.S. states have also enacted laws addressing non-consensual sexual deepfakes and misleading election media, often amending existing "revenge porn" or child sexual abuse material laws. However, inconsistencies in definitions and scope among states exist.

In Europe, the EU Directive to combat violence against women and domestic violence sets minimum standards for the criminalization of online gender violence, including non-consensual image sharing, cyberstalking, cyberharassment, and online hate incitement. Specific country initiatives include France's law against revenge porn and Ireland's "Coco's Law," which addresses the distribution of intimate images without consent. The Council of Europe has also been engaged in combating cyberviolence against women and girls through conventions and recommendations.

International organizations also play a crucial role. The UN leads efforts to combat technology-facilitated gender-based violence through various initiatives and advocating for stronger laws and policies globally. The Global Digital Compact and the UN Cybercrime Convention are examples of these efforts. The World Bank has also conducted research highlighting the lack of comprehensive legal frameworks against cyberharassment globally. There is a growing global awareness and legislative effort to address online gender violence and the misuse of technologies like AI, with various approaches being adopted in different regions and countries. The different approaches suggest an evolving understanding of the problem and the best ways to address it.

Country/Region | Legislation/Initiative | Focus
United States (Federal) | Take It Down Act | Criminalizes non-consensual deepfake pornography; requires removal by platforms.
United States (Federal) | NO FAKES Act (Proposed) | Establishes federal intellectual property rights for voice and image to combat deepfakes.
United States (State Level) | Various state laws (e.g., California AB 602, Texas SB 1361) | Criminalization of non-consensual sexual deepfakes and misleading election media.
European Union | EU Directive to combat violence against women and domestic violence | Criminalizes online gender violence, including non-consensual image sharing, cyberstalking, etc.
France | Digital Republic Law | Stricter penalties for revenge porn.
Ireland | Coco's Law | Penalties for distributing intimate images without consent.
Council of Europe | Istanbul Convention, Recommendations | Frameworks for combating cyberviolence against women and girls.
United Nations | Global Digital Compact, UN Cybercrime Convention | International efforts to strengthen laws and cooperation against online violence.

Implications and the Future of Protection

The new Brazilian law has a potentially significant impact on preventing and punishing violence against women using artificial intelligence. The increased penalties may deter individuals from using AI to commit psychological violence due to the harsher punishments. Additionally, the law recognizes the specific harm caused by technologically facilitated abuse, which may lead to more effective prosecution and better support for victims. The law also has symbolic importance in recognizing and condemning this emerging form of violence. The law has the potential to be a significant advancement in the protection of women in the digital age, sending a clear message that this type of abuse will not be tolerated.

However, it is important to recognize that legal measures alone may not be sufficient to fully address the issue. Developing education and public awareness campaigns to inform people about the risks of AI misuse and the importance of online safety is crucial. Furthermore, it is essential to provide adequate support and resources for victims of technology-facilitated violence, including psychological counseling and legal assistance. The continued need for technical solutions and platform accountability to prevent the creation and dissemination of harmful AI-generated content is also evident. International cooperation is equally important to address the transnational nature of these crimes. A multifaceted approach involving legal measures, education, victim support, technical solutions, and international collaboration is necessary to effectively combat technology-facilitated violence against women.

Conclusion

Law No. 15.123/2025 represents an important milestone in addressing violence against women in the digital environment by increasing the penalty for psychological violence committed using artificial intelligence. This legislation acknowledges the severity and destructive potential of the misuse of technologies such as deepfakes and automated cyberstalking. Although the new law has a potentially significant impact on the prevention and punishment of these crimes, it is crucial to recognize that it is only one component of a broader strategy. The effective protection of women in the digital age will require ongoing efforts in education, awareness, victim support, and the pursuit of technical solutions that prevent the proliferation of online violence. Collaboration between lawmakers, law enforcement authorities, digital platforms, and civil society will be essential to ensure that the online environment is a safe and violence-free space for all women.
