
Digital Gendered Violence in Chile: Development of a system for report, monitoring and response-orientation based on a feminist chatbot | Sofia

We update here the section of the book entitled "Incubating Feminist AI", which originally included summaries of the first three projects selected in the call for proposals of the same name and now offers the final versions of the seven LAC projects incubated so far.

Published on Jul 10, 2023


This paper systematizes the research process for a first prototype of a monitoring, guidance, and alert system for situations of digital gender-based violence in Chile. It proposes a solution based on the development of a conversational agent (chatbot) guided by feminist principles that consider ethical issues and put at the center the needs and contexts of women who experience this type of situation daily on their social networks. The methodology is based on User Experience (UX) research and interviews with other stakeholders, from which key information and recommendations were obtained to develop a first proposal of what this system should look like. Finally, we provide a proposal for the basic interaction flow for the design of the chatbot, general considerations on the system's operation, and its projection, impact, and next steps.


We would like to thank

Valentina Prater and Camila Alcaíno for their support in the research project.

To the women of the Articulación Feminista Elena Caffarena network in Chile and others who allowed us to hear about their experiences and testimonies of confronting digital gender-based violence.

To the people interviewed for the study:

Fabiola Gutiérrez (Corporación Humanas), Jessica Matus (Fundación Datos Protegidos), Mónica Maureira (Representative of the Government of Chile to the MESECVI-OAS), Constanza Figueroa, Tomas Lawrence (Interpreta Lab), Maurizio Sovino (Public Ministry of Chile), Luis Orellana (Brigada de Cibercrimen, Policía de Investigaciones).

To the Fair+ Network for the funding opportunity for this research.


All forms of violence against women are part of a continuum within the patriarchal structure of domination in which we live. Digital Gender-Based Violence (DGV), a growing problem in an exponentially digitalized society such as ours, is an expression of this. Nowadays, digital spaces are an extension of our physical reality; therefore, the gender-based violence that women face daily in different spaces is also manifested through the use of technological media.

Despite being a recurrent problem, Chile has no systematized data or statistics on situations of DGV. Nor do we have policies that make it possible to measure these violations, respond to the women who suffer them, follow up on cases, and systematize them. This happens for the same reasons as in other countries: it has been a minimized, invisibilized issue, unattended by political and police authorities, even though it was identified more than fifteen years ago. There is also no institutional channel for complaints and reports when these cases occur in women's daily lives. In Latin America and elsewhere, it has been feminist civil society organizations and cyberfeminist collectives that have generated initiatives of support, response, guidance, and containment in these cases, which are not enough to address the quantity and variety of cases that occur.

Some questions arise: can AI-based technological solutions offer an answer to the complexity of the problems associated with DGV? And how can we take up the invitation to create, and to challenge, from feminist principles and practice?

Artificial Intelligence (AI) can be a powerful tool to fight gendered violence, since it can analyze large amounts of unstructured data and detect critical information. In particular, when dealing with free text, Natural Language Processing (NLP) allows computers and humans to interact through language. Applied to DGV, it can collect complaints and give support based on available resources. This support can ease the reporting process and encourage more women to ask for help.
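As a minimal sketch of this idea, a first keyword-based triage of a free-text report could look like the following. The category names and keyword lists here are purely hypothetical illustrations, not the project's taxonomy or implementation:

```python
import unicodedata

# Hypothetical mapping from DGV category to indicative keywords
# (illustrative only; a real system would use richer NLP models).
CATEGORY_KEYWORDS = {
    "non_consensual_dissemination": ["intimate", "photos", "shared", "published"],
    "harassment": ["messages", "insults", "threat", "stalking"],
    "identity_theft": ["fake account", "impersonat", "stolen identity"],
}

def normalize(text: str) -> str:
    """Lowercase and strip accents so keyword matching is more robust."""
    text = unicodedata.normalize("NFKD", text.lower())
    return "".join(c for c in text if not unicodedata.combining(c))

def triage(report: str) -> list[str]:
    """Return the DGV categories whose keywords appear in the report."""
    clean = normalize(report)
    matched = []
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in clean for kw in keywords):
            matched.append(category)
    return matched
```

A report such as "Someone created a fake account with my photos" would match both the dissemination and identity-theft categories, which the system could then use to point the user to the corresponding resources.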

It is important to mention that there are ethical and social questions related to aspects such as data mining and extractivism, the mishandling of privacy, or, ultimately, giving power to a machine (AI) in decisions that have consequences and impact on people's lives. With these cautions in mind, we believe that this type of solution makes it possible to automate processes that are key to generating a database enabling more accurate analysis of variables such as (i) the characteristics of these cases, (ii) their recurrence, and (iii) patterns and trends concerning the profiles of aggressors or perpetrators on social media platforms.

Along with this, we want to explore the possibilities this type of development offers to give a first response in situations where it is unclear what to do or whom to turn to, and where no other human or face-to-face mechanisms for listening, reception, and reporting exist.

In this context, this research seeks to design a solution as situated as possible, considering the diverse social contexts and personal situations of the women who experience DGV. It incorporates feminist principles and practice into the design and development of a prototype that addresses these issues from the perspective of women who have suffered different forms of online violence and of related institutions, along with a review of national and international experiences that have sought to respond to DGV.

Objectives

General Objective

Identify and systematize good practices and recommendations to propose a prototype of an AI conversational agent (chatbot) that allows receiving, systematizing, and giving a first orientation on situations of digital gender-based violence occurring on social media platforms.

Specific objectives are:

  • Identify the basic and central conditions and characteristics for developing a prototype of a feminist chatbot to attend to Chilean women users of social media platforms who are victims of digital harassment.

  • Establish guidelines to project the design and development of a prototype platform to alert and visualize data on situations of digital harassment and hate speech attacks against women.

Digital Gender Violence as a feminist issue and the context in Chile

Given the accelerated use of information technologies due to the COVID-19 pandemic and the digital migration of social spaces that this has brought about, online or digital gender-based violence has become a growing, highly complex, and priority issue.

Definitions of DGV

A key aspect is understanding that situations of online or digital violence are not static over time: they are not the same as at the end of the 1990s, when the Internet began to be used daily. Their conceptualization and characterization are therefore a dynamic phenomenon, evolving with the technologies themselves and with their social uses. What remains constant is that DGV comprises any conduct, action, or behavior involving attacks against women, girls, and adolescents.

This evolving understanding has informed the attention and definitions provided by the United Nations Special Rapporteur on Violence against Women and Girls (2017), UN Women, and other international organizations, which consider it to be:

“Any act of gender-based violence against women that is committed, aided, or aggravated in whole or in part by the use of information and communication technologies, such as cell phones and smartphones, the Internet, social media platforms, or email, against a woman because she is a woman, or that affects women disproportionately” (UN Special Rapporteur, 2017).1

Recently, the Spotlight Initiative, led by a consortium of organizations (the Inter-American Commission of Women and the Follow-up Mechanism of the Convention of Belém do Pará, MESECVI, with support from UN Women), has established that it is:

“Any action or conduct against women, based on their gender, that causes death or physical, sexual or psychological, economic or symbolic harm or suffering, in any sphere of their lives, which is committed, instigated or aggravated, in whole or in part, with the assistance of information and communication technologies” (MESECVI-UN Women, 2022).2

Types of situations of DGV

According to various typologies proposed in academic studies and by civil society, at least 10-12 different forms of online violence against women have been identified, ranging from unauthorized access (intervention) to personal accounts or devices, control and manipulation of information, and monitoring and stalking, to the dissemination of personal or intimate information without consent (Peña, 2017; Datos Protegidos, 2018; MESECVI, 2022).

The following is a summary of the types of Digital Gender Violence systematized by different initiatives and research carried out in Latin America and Chile3:

1 Human Rights Council, United Nations (2018). Report of the Special Rapporteur on violence against women, its causes and consequences, on online violence against women and girls from a human rights perspective.

2 MESECVI-UN Women 2022, Report on Cyber-violence and Cyber-stalking against Women and Girls, in the Framework of the Belem Do Para Convention.

3 References for the categorization at Table 1: MESECVI-UN Women (2022), Datos Protegidos (2018), and Peña, Paz (2017).

Table 1. Summary of Digital Gender Violence typologies and situations (cases)

Disclosure of personal or intimate information without consent: Sharing or publishing, without consent, any type of photos, videos, data, or private information that affects a person; also the collection and publication of private or identifying information about a person, generally for the purpose of intimidating, humiliating, or threatening her.

Impersonation, usurpation, and identity theft: Use and/or falsification of a person's identity without consent; the theft or obtaining of information that may imply loss of control over it, and any attempt to modify it without consent; creating fake or impersonating accounts.

Unauthorized access (intervention) to personal accounts or devices, and access control: Attacks on a person's accounts or devices in an unauthorized way. It may involve the unauthorized obtaining of information and/or restrictions on access.

Online harassment / cyberbullying: Receiving repeated private messages and/or public comments, in an unsolicited manner, from one or more people through various social networks or messaging applications, where the messages are annoying, disturbing, or intimidating (sexualized or not). It includes stalking (harassment through repeated actions of pursuing the victim) and threats.

Hate speech: Speech that incites discrimination, violence, exclusion, intolerance, and hatred, and that expresses contempt, denigration, objectification, insult, threat, or attack against a person due to identity characteristics (gender identity or expression, social class, disability, race/color, etc.).

Use of aggressive language, insults, and discriminatory expressions: Insults and comments that seek to harm, attack, belittle, or exclude a person or group from a space; acts of distinction, exclusion, restriction, or preference that nullify or impair the recognition, enjoyment, or exercise of any right of a person or group of people.

Coordinated mass attacks: Mass organization and execution of coordinated "campaigns" or "strategies" to attack and discredit a person.

References: several sources

It is also important to consider that every woman can be at risk of online victimization as an extension of gender-based violence exercised outside the Internet. Thus, it has been proven that the simple fact that women and girls use and are active on different digital platforms (particularly social networks) puts them at risk of being victims of gender-based violence. However, “this increases when they actively participate in the digital debate or political life, or when they speak out in favor of human rights or gender equality” (MESECVI-UN Women, 2022).

The consequences of digital gender-based violence have been documented in several works and include impacts on personal and public life, in areas such as identity, dignity, and physical and emotional integrity, but also on the exercise of freedom of expression4, to the extent that one of the reactions may be to suspend one's accounts or profiles or simply stop participating in the space that social media platforms offer (Citron, 2014; CIDH 2002).

In recent years, the concept of Political Violence against Women has gained relevance. The Inter-American Model Law on Political Violence against Women5 defines it as: "any action, conduct or omission, carried out directly or through third parties, which, based on their gender, causes harm or suffering to one or more women, and which has the purpose or result of impairing or nullifying the recognition, enjoyment or exercise of their political rights"; it can also be physical, symbolic, psychological, etc. In this sense, social media platforms are strategic spaces for women who have a voice and active participation as leaders, activists, communicators, and journalists.

4 See the joint statement of the freedom of expression rapporteurs on "Freedom of Expression and Gender Justice", IACHR-UN (2022) (available only in Spanish).

5 Organization of American States (2017). Model Inter-American Law to Prevent, Punish and Eradicate Violence against Women in Political Life. Inter-American Commission of Women, Follow-up Mechanism of the Convention of Belém do Pará.

Chilean context

In Chile, we have no systematized or official data or statistics on situations of Digital Gender Violence, nor policies that allow us to measure, systematize, and manage cases or complaints. Some studies6 carried out in Chile, as baseline or exploratory diagnostics based on surveys or analysis of social networks (mainly Twitter), show that between 70% and 80% of women are affected by DGV, and that these situations are experienced and confronted individually; that is, they are situations of violence lived without necessarily having a support network or the possibility of sharing them in a community. A good part of these cases become known only through one-off contact with other people.

Nor is there an institutional framework to deal with this problem, public policies focused on its prevention, punishment, or reparation, or mechanisms to monitor and follow up on the complaints and cases that arise, and thus generate systematized data and a dynamic response and support for the victims of these situations.

In the last five years, two bills have been introduced that seek to punish and typify situations of digital violence, which would imply the development of more focused public policies: (i) a bill that outlaws, criminalizes, and punishes digital violence in its various forms and grants protection to the victims (in process in the Chilean Congress since January 2020)7; (ii) the bill on women's right to a life free from violence, which includes provisions addressing some manifestations of online violence, including the unauthorized dissemination of intimate images and the audiovisual recording of a person without their consent as a form of sexual harassment (in process in the Chilean Congress since 2017)8.

Despite these legislative advances, the biggest challenge is to create timely information and communication systems among women who are victims of these situations, and to gather evidence of them in order to strengthen public institutions and women's organizations or groups so they can prevent, alert, and monitor these situations.

6 Fundación Datos Protegidos (2018), "Gender Violence on the Internet in Chile: Study on the most common behaviours of digital gender violence and the intervention of criminal law". Available at

Human Rights Centre of the Faculty of Law, Diego Portales University (2019). Annual Report on Human Rights in Chile 2019. Available at anual-sobre-derechos-humanos-en-chile-2019-2/

7 The bill can be consulted on this website of the Chilean Congress 928-07

8 The bill can be consulted on this website of the Chilean Congress

Towards a conceptual proposal on DGV and an AI technology solution

As with the social problem of gender violence in general, there are several diverse needs that require the coordinated action of different actors and organizations, ranging from the provision of guidance and information on what to do when faced with a situation of DGV, to the creation of reporting mechanisms and systems, and certainly a system or platform for support and accompaniment (legal, psychological, and emotional). In most Latin American countries, and particularly in Chile, there are few civil society organizations or public institutions dedicated exclusively to investigation, reception of cases, support, and accompaniment.

In recent years, different projects from the public sector and nonprofit organizations have created or developed solutions based on artificial intelligence or automated technology to prevent or address situations of gender violence in general, from intimate partner or domestic violence to street harassment. Most of these proposals seek to be a tool or resource to support aspects such as reporting a case, the stages of guidance, and the delivery of information on what to do, or even to build predictive models of these situations from databases that already exist in the public complaint system9.

The incorporation of AI-based technological solutions into the problem of gender violence aims to enhance the design and development of databases, which are often not properly coordinated, in order to improve or automate care systems (for example, by optimizing the volume of calls or contacts from women facing these situations), and to use this data to develop models or patterns that help prevent violence, identify risk groups or more vulnerable profiles, or improve protection mechanisms for women.

In general, the most formal and common action when facing such a situation is to report it on the social media platform itself. These companies have created mechanisms or automated systems for internal reporting and complaints, based on automated procedures and on the examination of the rules or norms each platform has to regulate situations such as cyberbullying, harassment, or other violent actions. These procedures have the following shortcomings: (i) there is no system to follow up on the complaint or report; (ii) they do not guarantee that the attacks or violence will cease, since the accounts of aggressor profiles are not necessarily deleted or suspended immediately; (iii) although they allow blocking an account identified as an aggressor, this does not prevent the situation from continuing through the activation of another account; and (iv) there is a lack of transparency and information for users on how to avoid being continuously exposed to these situations.

9 Some of these developments include the "Violetta" chatbot project in Mexico, which provides support to prevent and detect situations of intimate partner violence. Other projects have been included in a brief benchmark analysis of chatbots in this research paper.

The platform "Gender Violence Unit (Sistema VioGén)" of the Spanish Secretary of State for Security has incorporated advanced analytics and artificial intelligence software into reports of gender violence in order to predict repeat assaults (for more information, see this article in the newspaper El País).

This research proposes the design and development of a prototype chatbot or conversational agent (CA), a type of system that automates human conversational behavior and is being continually perfected: "Conversational AI systems have many names depending on their capabilities, domain, and level of embodiment. These terms include automatic agent, virtual agent, conversational agent, chatbot, or, for very simple systems, bot (...) We use the term Conversational Agent (CA) to refer to systems that have a Conversational AI component and have other features such as a user interface (UI) to facilitate interaction and server-side features such as the app logic and the database" (Ruane, Birhane, Ventresque, 2019).
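To illustrate the components named in this definition (a conversational component plus server-side app logic and a database), the following is a minimal, hypothetical sketch; the intents, wording, and flow are ours for illustration, not the Sofia prototype:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    """Per-user dialogue state tracked by the app logic."""
    step: str = "greeting"
    report: dict = field(default_factory=dict)

class FeministChatbot:
    """Tiny state-machine dialogue: greet, collect a report, offer orientation."""

    def __init__(self):
        self.db: list[dict] = []  # stand-in for the server-side database

    def respond(self, state: ConversationState, message: str) -> str:
        if state.step == "greeting":
            state.step = "listening"
            return ("Hi, I am a chatbot, not a real person. "
                    "If you want, tell me what happened, in your own words.")
        if state.step == "listening":
            state.report["description"] = message
            state.step = "orientation"
            self.db.append(dict(state.report))  # store without identity data
            return ("Thank you for sharing. Would you like guidance on "
                    "reporting options or support organizations?")
        return "You can come back any time."
```

Even in this toy form, the sketch reflects two of the feminist design considerations discussed later in the benchmark: the agent states up front that it is a bot, and the stored report carries no identifying data.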

There are also experiences documented in academic studies of initiatives promoted by feminist hacker-activists to create feminist chatbots. These enlarge the feminist technology toolbox and empower organizations, collectives, and individuals to employ algorithms for feminist ends, changing the machine-learning dynamics traditionally used by technology companies (Toupin, Couture, 2020).

We also want to take into account the proposal of feminist principles for Artificial Intelligence developed by Juliana Guerra (2022), which raises critical issues and prior questions for the development of any solution, design, application, or technological tool based on AI.10

Table 2. Summary of a feminist principles framework for AI

Collaboration, participation, autonomy: Before starting, understand AI as the result of collective processes. Identify, through a collaborative process of knowledge exchange, what needs an organization or community wants to satisfy with an AI system.

Community development, situated knowledge, resistance: Determine, based on the recognition of the context, capabilities, and available resources, whether an AI system is the best way to satisfy the identified needs.

Care, resilience, co-responsibility: Agree on expectations, roles, timing, and commitments at the different stages of the development and deployment of an AI system.

Usage, intersectional view, consent: Recognize, within the previously agreed working group, possible risks, opportunities, and conflicts associated with the development and deployment of AI systems at their different stages.

Free and open source, movement building, privacy, governance: Define, in a conscious and informed manner by the organization or community, the resources, tools, and services to be used for the development and deployment of the AI system.

Source: Guerra, J. (2022) "Towards a feminist framework for AI development: from principles to practice", Feminist IA+ Network.

10 Summary of the proposal by Juliana Guerra (2022) "Towards a feminist framework for AI development: from principles to practice", Feminist IA+ Network.

Benchmark of other chatbots

During the research process, we reviewed some of the various apps and chatbots designed and created in the last decade with similar objectives regarding reporting, denouncing, and providing guidance and accompaniment in situations of gender-based violence in general, ranging from intimate partner violence to digital harassment. Designed and managed by a variety of organizations, from feminist collectives to initiatives of public institutions, they also vary in their models of design, use or application of technologies or software, and funding and sustainability.

In Table 3 below, we present a comparative benchmarking exercise of four reporting and accompaniment support systems that are active and operational to date. Three of them are in the field of gender-based violence (including digital violence) and one is related to another issue. This benchmark analyzed, in particular, their application of feminist principles and their privacy and data use policies.

The four chatbots projects analyzed were: Maruchatbot, Soy Violetta, AcosoOnline and SocorroBot.

Table 3. Benchmark of Chatbots

Maruchatbot (since 2020)

Platforms and language: Online chatbot on a website, plus an informational website. Available in English.

Main objective: To support and empower people who are experiencing, witnessing, or fighting online harassment by providing real advice and resources from experts and activists.

Feminist principles included:

  • Co-designed with young activists

  • Considers barriers people may face in accessing the chatbot

  • Uses empathetic and inclusive language

  • Does not reinforce stereotypes

  • The design reflects on biases that might exist in the team

  • Represents global perspectives

  • Indicates that it is a chatbot and not a real person

Privacy and data protection:

  • The system operates entirely on a web platform (it does not use messaging applications)

  • The system does not request any kind of identification from the user (neither email nor phone number)

Violetta (since 2022)

Platforms and language: Website (requires a login via mobile phone); WhatsApp chatbot @Violetta; Facebook and Instagram. Available in Spanish.

Main objective: To be a "digital confidant": detect, prevent, and stop violence within any personal relationship.

Feminist principles included:

  • Indicates that it is a chatbot and not a real person

  • Artificial Intelligence (AI) is used to detect keywords or emojis indicating that the user is probably going through a crisis of violence

Privacy and data protection: It has an explicit privacy and data use policy (available in PDF format). Interacting with the chat requires logging in with a mobile phone number, and the WhatsApp chatbot interacts with mobile numbers.

Acosoonline (since 2017)

Platforms and language: Website; Telegram chatbot @acosoonlineBot; also Instagram and Twitter. Available in Spanish and Portuguese.

Main objective: To support and empower people who are experiencing, witnessing, or fighting online harassment by providing real advice and resources from experts and activists.

Feminist principles included:

  • Provides information and guidance on cases of non-consensual dissemination of intimate images, either on how to report a case or how to seek legal advice

  • Indicates that it is a bot and not a person

  • Refers to other project partners in Latin America (and Europe) for country-specific information (legal, etc.) and accompanying guidance

Privacy and data protection: It has a privacy and data use policy (for different countries). The Telegram chatbot interacts with mobile numbers.

SocorroBot (since 2019)

Platforms and language: Website; WhatsApp chat @SocorroBot. Available in Spanish.

Main objective: To guide those who are searching for missing persons in Mexico, providing clarity on how to navigate the administrative and legal barriers they may encounter.

Feminist principles included:

  • The website is a complement to the chatbot

  • Indicates that it is a chatbot and not a real person

  • A data search engine helps both with denunciation and reporting and in cases of enforced disappearance

Privacy and data protection: It has a data usage and privacy policy, terms and conditions, and terms of use. No personal information is requested when interacting.

Key Findings:

  • The chatbots included in the benchmark do not function only as independent developments operating from messaging applications; they are associated with web-based platforms that enhance them with other tools and information resources, offering more precise or complementary orientation than what the chatbot can deliver autonomously.

  • The development of collaboration strategies or alliances at the civil society- academia level is key to the management and sustainability of the projects.

  • The process of implementing co-design strategies allows for the identification of specific needs and problems, but also for adjustments to be made to the primary ideas of the teams that develop and manage them.

  • These chatbots are starting to apply software or AI technology to enhance the chatbot's autonomous interaction and to systematize the information and data (keywords or even emojis) they receive through complaints or reports, but they do not necessarily make this information visible.

  • These projects seek to be autonomous in their management, so as not to depend on public institutions or the regulatory framework, which tends to be weak or non-existent in cases of gender-based violence.

Methodology: Applying feminist principles in research with women who have experienced DGV situations

In order to develop, from a feminist perspective, a technology-supported response system for women who have suffered episodes of digital gender-based violence, the entire creation process must be carried out from this same perspective. Therefore, the methodology used for the design of the prototype solution was developed based on various feminist principles discussed by authors in the fields of user experience research (UX research), human-computer interaction (HCI), and participatory co-creation methodologies (see, for example, Bardzell, 2010; Bardzell & Bardzell, 2011; Hope et al., 2019).

Likewise, the creation and development of the solution will be guided by feminist principles proposed by various authors, among them Juliana Guerra (2022, op. cit.). We have also used the guide "Feminist Design Tool: Defensible decision-making for interaction design and AI", created by Feminist Internet and Josie Young, which walks through crucial questions on how to approach the design of AI- and technology-based solutions from a feminist perspective.

Among the qualities identified as necessary for feminist interaction design, pluralism, participation, advocacy, and embodiment stand out for this research (Bardzell, 2010). These concepts allow us to rethink the design process from a critical position of resistance to totalizing or dominant points of view, opening up to a plurality of voices and experiences.

To this end, the design process must be participatory and consider from its foundations co-creation with women who have suffered these episodes and are usually made invisible, and not only with people or institutions that are experts on the subject or with the perspectives of the researchers in charge. The latter requires a reflective stance on one's biases and position as a researcher, to avoid reproducing inequalities in research practice (Hope et al., 2019). Likewise, co-creation with women, who will be the potential users of the prototype, shifts agency from the designers to the interface users.

All of the above is based on the perspective that all knowledge is socially situated; therefore, methodologies must be generated to collect it ethically, without excluding or invalidating experiences. This also implies taking account of the power dynamics in which the research is embedded and, from reflexivity, expanding the methodologies that include usually marginalized voices, to give a platform to their experiences, needs, and desires (Bardzell & Bardzell, 2011).

Research methodology

Based on the theoretical and political principles described above, the design of the DGV response system followed a two-phase qualitative methodology. (1) First, as the central research axis, it considered conversation, interviews, and participatory methodologies with women who have suffered experiences of digital violence and with feminist organizations working on this problem. (2) Second, it included interviews with official institutions in the country, other stakeholders related to this type of violence and digital crimes, and experts on the subject. These phases and their main results are briefly detailed below.

  1. Phase 1: Co-creating with women and feminist organizations

    1. Feminist UX Research Methodology

Our approach was through in-depth, semi-structured interviews. These interviews were directed to potential users of the platform. In these interviews, we learned about their experiences, needs, desires, and problems they faced at the time of receiving and reporting digital gender-based violence.

Our goal was to approach the interview from emotionality and experience, in order to distance technology from the supposedly neutral and detached space from which it is often viewed. We wanted the interviews to be respectful of each woman's experience, avoiding asking for a raw and harsh account of what had happened. That is why we generated three dynamics that we shared with the users:

  1. Head, heart, hands: The first activity was used as an icebreaker. In this activity, the women approached the problem and their own experience, but from the body. They expressed everything they felt and thought about the subject, asking themselves three questions: What do I think of DGV (head)? What do I feel (heart)? And what do I do (hands)?

  2. Journey: This consisted mainly of co-creating with women the journey they lived while experiencing digital gender-based violence. It was not just about reconstructing the experience but also bringing up the associated feelings, the people or institutions involved at each stage, and their needs at each moment of the process. The classic journey map canvas is composed of rows and columns. In our case, the first row corresponded to the actions (What happens to a woman who experiences gender-based violence online?). The second referred to the needs (What are her needs?). The third corresponded to the points of contact (Whom does she interact with?), and the last one to the feelings (What are her emotions?). As for the columns, based on the literature, they presented the different stages of a digital gender-based violence episode: (1) interaction in social networks, (2) receiving violence, (3) facing violence, (4) follow-up actions, and (5) report and conclusions. They were intended to be broad enough for women to tell their own experiences without feeling constrained by the abovementioned stages. Diagram 1 shows this map.

  3. Final ideas: The last part was intended to be more divergent and conclusive regarding what women considered essential for a platform to denounce and report their cases. It asked: Thinking of the time you experienced this episode, how do you imagine an ideal system for reporting this situation? Through what medium or platform? What guidance would you have needed, what kinds of support, etc.?

Five women with public profiles, such as artists or activists, were interviewed. The interviews lasted about an hour and were conducted through the Zoom platform. In addition, the Miro platform, a virtual whiteboard where you can post, annotate, and make tables, among other tools, was used for the different activities (see Annex 1). The interviewer took notes on the activity boards as the women told their stories. On the one hand, the visual support helped the women to better understand the different activities and enter into the dynamics; on the other hand, it allowed them to corroborate what they were saying.

Diagram 1. Journey map of Digital Gender-Based Violence

  2. Preliminary findings

The interviews were challenging because sensitive, even traumatic, episodes were relived. This is why it was essential to empathize with the stories and guide the interviewees toward the places important for what we were creating, without going into details that would generate discomfort. Even though the process was very exploratory, it was constructive for understanding the context and circumstances in which our solution would be used, and what it should be like.


The main result we expected to find was a journey of how women acted when they faced a situation of digital violence, in order to understand what our Artificial Intelligence approach should look like. For us, it is fundamental to know what happens concretely in that experience and the affective and emotional dimension that underlies any situation of digital violence. Our technology solution must be close to the reality of women, what they feel, what they think, and what they do in these situations, making this a solution focused on women and the community they are part of. For this purpose, we summarized the journey maps in a single one (Diagram 1) with different layers. The first corresponds to the stages and decisions the affected woman took, the second to some annotations or points of contact, and the last to the different feelings that run through her experience.

Some critical and divergent points emerged among the users: once they receive violence, some decide to respond, while others are paralyzed by fear and find it difficult to act or confront the situation. It is also essential to understand that many struggle to discuss this situation openly; hence, they only discuss it with people close to them, such as family members or friends from their intimate circle. Generally, these are the ones who alert them that the situation is serious and that action must be taken in some way. In any case, some survivors decide to try to continue with their lives and do not make any act of denunciation.

Undoubtedly, situations of digital violence make all women modify their behavior on social networks. The women told us how they began to restrict what they publish, stopped answering internal messages (Direct Messages), stopped uploading some types of photos, or made private the accounts that they could make private.


Within a system that is inefficient and renders digital gender-based violence invisible, one of the main feelings the women reported was loneliness and helplessness over what happened to them. In addition, some described feeling paralyzing fear and even shame about what had happened to them. These feelings did not allow them to act clearly; they did not know whom to turn to or how to proceed with a complaint.

The women who sought support often did so because they told their close circle, who alerted them that they should ask for help. They went to different institutions or people who could guide them on what to do. Their needs can be summarized in three main ones: how and where to report the crime, for which they contacted lawyers or the PDI directly; psychological help; and, finally, digital security, as one interviewee wanted to know how to protect herself from future attacks and keep her accounts from being hacked or usurped. The above is important because women with a public profile often cannot close their social networks, as these are part of their work.


We identified that for many survivors, a critical moment arises from socializing their experience. First, they felt humiliated to share their experience with close circles since, in some cases, they had been sexually harassed. In addition, linked to what we had already identified, survivors often felt that no one understood them (not even their closest circle, since they had not lived through what they had experienced) and that they were helpless or unheard. These feelings brought up the idea of creating an online support community for women, an idea born from the participants themselves. The aim was to feel understood and accompanied by other women who had gone through the same thing, so that they could share their experiences, listen, and give recommendations. Some of them even mentioned that, after having lived an experience of online digital violence, they would like to share it with other women so that others do not go through the same.

Phase 2: Asking other Stakeholders

To complete and reinforce the previous phase, we conducted seven interviews with different stakeholders related to the problem and to the area of gender violence and digital gender violence, considering that all of them could be strategic allies and collaborators for the next steps of the project.11

We present in Table 4 below a summary of the main recommendations:

11 The interviews were conducted with informed consent. The interview guideline consisted of three main topics: i) the interviewee's expertise and relation to the problem of gender violence in general and digital gender violence in particular; ii) their approach to assessing challenges and opportunities in developing AI-based technology solutions; iii) recommendations for the design process of a chatbot-based DGV alert system.

Table 4. Summary of main recommendations for the design process of the DGV chatbot system


Recommendations for chatbot prototype development - DGV monitoring platform

1) Sociologist expert in social network analysis (data analytics), hate speech and sexism. Counterpart of the research "Being a politician on Twitter", on harassment and digital violence against women representatives in the Chilean constituent process.

  • When developing an A.I. such as a chatbot, it is essential to define very well who it is aimed at, what it is for and to manage the expectations of what the user is going to get.

  • You have to listen to the women who are going to be the users, to the people, to know their needs and for it to be a process of co-construction.

2) Expert lawyer on DGV and personal data, currently director of the NGO that published the first study on gender-based Internet violence in Chile

  • Ideally, A.I. should be used to gather statistics and ideally in alliance with the Public Prosecutor's Office or state organizations.

  • To promote incentives and alliances to support the creation of projects and programs of help, guidance, shelter and case accompaniment for women (as is happening in other Latin American countries).

3) Cyberfeminist activist, currently coordinator of a project on online harassment

  • Define what you want to create the chatbot for, what you want to monitor with A.I. technology, and for what purpose.

  • Also define very well what your needs are and for what purpose this bot would be used. One way of doing this could be to measure or provide alerts about each attack that this person receives, which could be used to generate a record and then data to identify behaviors, such as what type of comments generate triggers, for example.

4) Chilean representative for Committee of Experts of the Follow-up Mechanism of the Inter-American Convention on the Prevention, Punishment and Eradication of Violence against Women, "Convention of Belém do Pará" (MESECVI).

  • It would be advisable to generate a preventive protocol: what appears unusual in your digital activity that should set off the first alarms.

  • Then, what to do when you perceive that you have been the subject of public debate (abortion, violence, etc.), and provide recommendations for protection and prevention, such as changing your profile to private, lowering your activity, etc., paying attention not to generate fear.

5) Feminist journalist, activist, communications officer in feminist organization. Counterpart of the research "Being a politician on Twitter", on harassment and digital violence against women representatives in the Chilean constituent process.

  • To create an alert system like a “traffic light” that allows sharing this information in harassment or cyberbullying cases, to alert other women and people, and to present or coordinate collective actions.

  • Provide recommendations on what to do in cases of harassment and violence. They can be standard responses to X situations on the platform.

  • They also highlighted the need for a gender approach to monitoring and updating digital platforms.

6) Consultant Lawyer Specialized Unit on Human Rights, Gender Violence and Sexual Offenses, National Public Prosecutor's Office of the Chilean Public Prosecutor's Office

  • It would be a contribution to generate data and make the issue of DGV more visible. Quantitative data is necessary to justify, make visible and request resources. In addition, the data could be used to generate cross-referencing information and even "intelligence", taking into account certain accounts and profiles that have an established behavior.

  • If AI makes an early detection, it could allow triggering a complaint and immediately request data preservation from social media platform companies (as an early action of preservation, capture and preservation of information as evidence of crime).

  • Provide guidance to those who suffer these situations: what the crimes are and what and how to proceed. "If the person thinks that a complaint is going to get someone thrown in jail immediately, they are going to get a lot of slammed doors".

7) Prefect of the Cybercrime Brigade, Chilean Investigative Police (Policía de Investigaciones de Chile)

  • In the case of developing an automated chatbot, it is very important to manage the expectations of the people who use it and, on the other hand, to point out that the act of communicating is not the filing of the complaint itself.

  • We need one of the bills on digital violence to be passed in order to have criminal offenses defined that allow us to carry out investigations and prosecute this type of crime.

Key Recommendations

  • To consider from the beginning the needs and context (experiences) of potential women users.

  • The chatbot should help mediate, as a first step, to get orientation and guidance about what digital violence is, how to report it, and how to share what is happening.

  • A.I. software or technology should be used to improve data generation from reported cases (from keywords, emojis or any other type of text input) in order to identify patterns or trends associated with: frequency of attacks, behaviors associated with harasser profiles, and generation of preventive alerts (a kind of “traffic light”). This is the potential to enhance it as a monitoring system or platform.

  • As a system, it has the potential to create protocols for accessing information on how to face and report attacks with the alternatives available (even if there is no response from the judicial-legal system), but also to create preventive protocols and an idea of community: a network that provides help and accompaniment.
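Several interviewees converge on the idea of a “traffic light” style preventive alert. A minimal sketch of how such levels could be derived from counts of recent reports; the thresholds are purely illustrative assumptions, not values from the study:

```python
# Hypothetical "traffic light" alert levels based on the number of
# reported attacks against the same person in a recent time window.
# The thresholds below are illustrative assumptions only.

def alert_level(attack_count: int) -> str:
    """Map a count of recent reported attacks to a traffic-light level."""
    if attack_count == 0:
        return "green"    # no recent reports
    if attack_count < 5:
        return "yellow"   # isolated incidents: share prevention tips
    return "red"          # possible coordinated attack: alert the network

print(alert_level(0))   # green
print(alert_level(3))   # yellow
print(alert_level(12))  # red
```

In a real system the thresholds would need to be calibrated against the reported-case database rather than fixed by hand.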

Presentation of the proposal: towards a feminist chatbot prototype

The following diagram (Figure 2) shows a basic proposal for the design process of a system for reporting and monitoring situations and cases of digital gender-based violence, based on the development of a chatbot that interacts with the person from an initial case report (point 1) through 9 stages of interaction flows.

The workflow diagram summarizes stages 1 to 7 of the process, followed by a written proposal with the 9 steps of the interaction between the chatbot and the woman who will use and interact with it.

It also presents some general considerations that have been incorporated from the suggestions and contributions collected in the 2 phases of previous research and that detail the basic conditions and characteristics of its programming, defining its impact and scope.

Diagram 2. Workflow

  1. Description of the Workflow

The flow/process of interaction has nine steps, detailed as follows.

Step 1: Reporting

Scenario: A woman who has suffered a DGV attack or situation, due to her active social media profile, connects to the web platform where a chatbot dialogue system will be activated and invites her to tell about her experience.

Introduction /Chatbot presentation

Greeting and presentation explaining what a chatbot is and its objective to provide support in the process of reporting a DGV case with guidance and information about these situations, and in relation to the legal and emotional support that may be required (in empathetic and unbiased language).

Step 1.1

Chatbot asks: What happened to you? (Open question.) Open space to write (with an established character limit).

> Chatbot will suggest types of DGV (shows or links to website with information)

Step 1.2.

Chatbot asks: Can you specify on what social media or web platform it happened? Select from these alternatives:









Step 1.3 Chatbot asks:

Did this attack/situation occur in public or in private? Select:

  • In public (in my social media profile/ comment)

  • In private (private or direct message)

Step 1.4 Chatbot asks:

When did this happen? (could be an open space or a calendar box to select)

Step 1.5

Chatbot asks whether she can upload a file or screenshot of the attack/situation

– Upload the file here

Step 1.6

Chatbot asks whether she can identify the social media profile/account of the attacker or provide other identification

Step 1.7. This first dialogue ends with an empathetic invitation to learn more about DGV

Outputs to obtain from Step 1:

  • Database of cases reported > visualization (data viz)

  • Systematization and training via A.I. to identify keywords used to classify the type of DGV, evidence and profile associated with the attack.
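The second output, training an A.I. to identify keywords that classify the type of DGV, could begin with a simple rule-based baseline before any model is trained on the reported cases. A toy sketch, where the categories and keyword lists are illustrative assumptions only:

```python
# Toy keyword-based classifier for DGV report types.
# Categories and keywords are illustrative assumptions; the project
# proposes training a model on the database of reported cases.

DGV_KEYWORDS = {
    "harassment": ["harass", "insult", "threat"],
    "impersonation": ["fake account", "impersonat", "usurp"],
    "non_consensual_images": ["intimate image", "photo of me"],
}

def classify_report(text: str) -> list[str]:
    """Return the DGV categories whose keywords appear in the report text."""
    text = text.lower()
    return [cat for cat, words in DGV_KEYWORDS.items()
            if any(w in text for w in words)]

print(classify_report("He created a fake account and sends me threats"))
# ['harassment', 'impersonation']
```

A trained classifier would replace the keyword lists, but this shape of input (free text) and output (one or more DGV types) matches the reporting flow of Step 1.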

STEP 2 GUIDELINES FOR REPORTING on social media platforms

Step 2.1

Chatbot says > If the attack occurred on X social media platform, I recommend that you report the situation at this link on the platform.

I.e. Where to report on Instagram? Go to

Step 2.2

Chatbot asks if more information is needed > link to article on website or tutorial video

STEP 3 GUIDELINES FOR REPORTING TO THE POLICE/ Investigative Police of Chile (Cybercrime Division)

Step 3.1. Chatbot explains >

In Chile, this type of case is not classified as a crime, but we recommend that you report it to the Chilean Investigative Police, through its cybercrime division

> link to website with emails and phone numbers

Step 3.2. Chatbot explains >

Or you can file a complaint using this online form of the Public Prosecutor's Office >


Step 4.1. Chatbot asks >

Do you want me to give you support information / legal guidance in Chile? YES > leads to a website link with more information about organizations


Step 4.2. Chatbot asks>

Do you want to know the legal framework that could help your case? Remember that digital violence and its different forms of occurrence are not a crime in Chile to date.

> The chatbot links to the website with access to more information about the legal framework


Step 5.1

Chatbot asks > Do you want me to give you emotional support / counselling information?

YES > leads to more information on the website with list or links to psycho-emotional support organizations



Step 6.1

Chatbot asks > Do you want me to support you with guidance on how to improve your digital security?

YES > leads to a downloadable FDP guide or recommendations / NO

Step 6.2.

Chatbot asks> Do you want to receive information on running workshops / talks on digital security?

YES > leads to a downloadable guide / video tutorial or recommends other organizations with guidance (e.g.



Step 7.1 Interaction on follow-up of the case/report

Chatbot asks > Do you want us to check in with you to ask if the attacks or harassment you have received are continuing?

YES > delivers contact email / NO

Step 7.2.

Chatbot asks > Do you want to receive information about workshops or talks about digital violence against women or your digital security?

YES > links to a website where you can find information on upcoming workshops and talks.


STEP 8. Connecting to an online Community

Step 8.1.

Chatbot asks > Do you want to be part of a community of women who have experienced the same as you?

YES > go to the community registration system on the website. NO

STEP 9 Closing dialogue/interaction with the person

  2. Technical, Social and Ethical Considerations of the System Workflow

    1. AI input and technology decisions

The system will operate on a web platform, on which the first chatbot prototype will operate. After testing, it will be defined if this chatbot will operate associated with a messaging system or just using a web platform.

The design process will be based on usability and user research (UX) based on simulated profiles or real cases. Prior to the development of the project, a preliminary test will be carried out with a group of women who have been previously contacted and who have experienced situations of DGV in their social networks and are part of feminist collectives and organizations.

The archetypal user profile considered for this proposal is based on the user research detailed above: a woman who receives or suffers a DGV attack and is very active on her social networks, acting as an opinion leader or informant on different areas of social/political contingency: from rights activists, communicators and journalists to academics who influence public policy.

To process the information of the reported case, an automated system with AI will be used to create a database based on the reports that allows, in a first stage, learning to distinguish: i) types of attack, ii) keywords associated with the situation of harassment/aggression/attack, iii) accounts or aggressor profiles, iv) frequency of cases, and v) recurrence of cases.
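Distinguishing frequency and recurrence of cases per aggressor profile could start from simple aggregation over the reports database. A toy sketch with assumed record fields (the real schema is not defined in this proposal):

```python
# Sketch of aggregating reports by aggressor profile to surface
# frequency and recurrence. Record fields are illustrative assumptions.
from collections import Counter

reports = [  # toy report records
    {"attacker": "@troll1", "type": "harassment"},
    {"attacker": "@troll1", "type": "threat"},
    {"attacker": "@other",  "type": "harassment"},
]

# Frequency: how many reports name each profile.
by_attacker = Counter(r["attacker"] for r in reports)

# Recurrence: profiles reported more than once.
recurrent = [profile for profile, n in by_attacker.items() if n > 1]

print(by_attacker)  # Counter({'@troll1': 2, '@other': 1})
print(recurrent)    # ['@troll1']
```

Aggregations like these are also the natural input for the automated data visualizations the system plans to publish.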

As a strategic goal, A.I. will also be used to start creating alerts when an attacker profile can be identified and/or when coordinated attacks or harassment of women occur on social media in sociopolitical contexts of crisis or conflict.

This database generated by the reports will allow the automated creation of data visualizations that will be made available to the public.

The development and programming will preferably use open-source software, allowing the use of resources that have already been developed for the programming and automation of chatbots.

Data, Case Report and Privacy

Concerning the case report of an attack or DGV, the woman will be asked for:

  • An approximate date of occurrence of the situation or attack.

  • The social media / social network platform where it occurred.

  • A screenshot or link detailing the profile or account that attacked her.

  • A basic record of the attack (e.g., screenshot or photo of the attacker/harasser message).

If the woman needs or requests it, this report will be available for download, to be used as evidence when filing a report with the police or the judiciary, or a complaint with social media platforms (through their reporting mechanisms for these cases of harassment or cyberbullying).

It will ask for a name/surname (not necessarily the real one), contact email, age range, occupation profile, and city of reference, only for purposes of designing the database and reports.

A data use and privacy policy will be included, written in an empathetic and accessible way, to explain how the information and data that the system will request for case tracking and reporting processes will be used.

System impact and scope

The launch of the system will be preceded by a dissemination campaign announcing that the system is active to receive cases and deliver basic guidance.

The impact of this prototype seeks to

  • Develop a database model on DGV in Chile that allows starting to categorize and quantify the typologies, occurrence, and recurrence of cases, and that allows monitoring of these cases.

  • Develop, as an associated product, open data visualizations (data viz), available for download and use by researchers, journalists, and other interested parties.

  • Generate reports that can become an instrument to support decision-making by different stakeholders/decision makers regarding public policies, the generation of advocacy actions, or initiatives or collaborative alliances/support from civil society and other feminist organizations.

Final reflections and next steps of the research

The process of this research has allowed us as a team to tackle in depth one of the problems that has increased significantly during the recent years of sociopolitical crisis and the coronavirus pandemic: digital gender violence. This type of violence is dynamic and varied in its forms of occurrence and social impact, and it is part of the structural violence that women have to face. This research has also allowed us to take the conversation beyond the legal-institutional aspect, which focuses solutions only on the existence of a legal framework and/or public policies that address these issues. The experiences we have researched and learned about in the process have reaffirmed for us the importance of the spirit of feminist projects: autonomy and the central value of collaboration between organizations and people who contribute knowledge to their development.

Originally, we set out to develop an alert and monitoring system for digital gender-based violence on social networks, specifically on Twitter. This decision was based on the idea of developing an automatic attack detection system, but during the research period the platform radically changed its algorithms and conditions of use, following its purchase by an entrepreneur in the technology area. The idea of a chatbot as a conversational agent was already present in the proposal as a complementary development.

During the process of support received from the coordination of the Fair+ Network, as well as in conversations and dialogues held with specialists from different disciplines and civil society, as well as the women themselves, we were able to narrow down the work and focus it on the proposal of a chatbot design as a tool to articulate the monitoring and reporting system, which also facilitates the delivery of information and guidance. This modification was also intended to recognize women's agency in identifying episodes of digital gender violence, so they can turn to the platform when needed.

The challenge for the development of conversational agent systems such as chatbots lies in the capacity of the AI to be designed and developed not only to "humanize conversations" based on natural language learning or the use of symbols such as emojis but also to generate patterns for the autonomous identification of situations of attack, harassment, and situations associated with digital gender-based violence. In this proposal, the chatbot's interaction with women facing DGV must recognize in a personal way that she is going through this situation and seeks help. For this reason, it is crucial to create communities where she can connect with other women and offer emotional-psychological support options to bring her out of the isolation she is facing and experiencing.

We emphasize that this is a system because, as we have identified in most similar projects, chatbots do not work as a single response tool but within an ecosystem of elements and resources that interact with each other, where the human factor continues, or can continue, to provide central value on complex issues. In this sense, the chatbot needs to function on a web platform that allows the development of content and resources that complement and generate inputs for the tool's interaction processes, and that also enables outputs from the data generated by a system that receives reports or complaints of DGV.

The feminist principles and approach are central to the development of this project because they have allowed us to focus on a situated and co-design approach, keeping in mind the questions: Can a solution like this be valuable and useful to support and encourage a woman facing digital violence? Does it facilitate the provision of information and guidance on what to do in a situation of online attack, harassment, or bullying? How can this interaction be designed in a minimally data- and information-extractive way?

The methodology used for the research was therefore careful to include both theoretical knowledge of the general issue of DGV and situated knowledge, using a feminist UX Research process adapted to listen to women who shared their situations, experiences, and testimonies of facing DGV, as well as to people who shared their knowledge, opinions, and visions, as they are part of a network of collaborators and allies who remain critical in articulating comprehensive responses.

Having completed this research project, we want to mention some of the next steps to be followed to develop the prototype proposal.

  1. To seek funding for the development and design phase of the prototype itself and its subsequent testing phase.

  2. To have a team of collaborators specialized in the development and design of chatbots.

  3. To be able to develop and adjust the prototype, defining all aspects of the technological development and programming of the chatbot, including decisions such as using open-source software or other market solutions. For this stage the inclusion of programmers is crucial, as is the use of feminist AI guides for the development of the prototype. We have already talked with an expert chatbot designer, who was aware of different ways of including a feminist and critical perspective in the programming of the tool. We will also draw inspiration from good practices in other projects, such as the aforementioned “Maru chatbot”, whose creators designed it using openly feminist principles.

  4. Establish alliances of collaboration and cooperation with the different stakeholders included in this research (civil society, public sector, academia and others), because they are fundamental to the development of the prototype.

  5. Generate a formal agreement with the National Center for Artificial Intelligence (CENIA) of Chile, which has committed to support and collaborate in a later development phase of the prototype.

  6. To continue to count on the collaboration of the network of feminist organizations and collectives defending women's rights, because they are strategic allies.

  7. To turn this document into a paper that can be disseminated and published in academic and open-access venues.

Bibliographic References

Bardzell, S. (2010, April). Feminist HCI: taking stock and outlining an agenda for design. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1301-1310). DOI

Bardzell, S., & Bardzell, J. (2011, May). Towards a feminist HCI methodology: social science, feminism, and HCI. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 675-684). DOI

Feminist Internet & Josie Young (n.d). Feminist Design Tool: Defensible decision making for interaction design and AI. Retrieved from: 3753a41b4b88/2-10_FeministDesignTool_2.0.pdf

Fundación Datos Protegidos (2018). “Gender Violence on the Internet in Chile: Study on the most common behaviours of digital gender violence and the intervention of criminal law”. Report. Available at genero-en-internet-en-chile/

Hope, A., D'Ignazio, C., Hoy, J., Michelson, R., Roberts, J., Krontiris, K., & Zuckerman, E. (2019, May). Hackathons as participatory design: Iterating feminist utopias. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-14). DOI

MESECVI-UN Women (2022). Report on Cyber-violence and Cyber-stalking against Women and Girls, in the Framework of the Belem Do Para Convention.

Guerra, Juliana (2022). “Towards a feminist framework for AI development: from principles to practice”, Feminist IA+ Network

Human Rights Council, United Nations (2018). Report of the “Special Rapporteur on Violence against women, its causes and consequences on online violence against women and girls from a human rights perspective”.

Ruane, E., Birhane, A., & Ventresque, A. (2019). “Conversational AI: Social and Ethical Considerations”, in Proceedings of the 27th AIAI Irish Conference on Artificial Intelligence and Cognitive Science, NUI Galway. Available at 2563/aics_12.pdf

Toupin, S., & Couture, S. (2020). “Feminist chatbots as part of the feminist toolbox”, in Feminist Media Studies. DOI: 10.1080/14680777.2020.1783802

Peña, Paz (2017). “Report on the Latin American Situation of Gender-Based Violence Exercised through Electronic Media”. Available at: American-Report-on-Online-Gender-Violence-final.pdf


We provide a link to an online document of the UX Research process with: a) the original diagrams of the journey map steps and the archetypal user profile, and b) notes (transcriptions in Spanish) of the participant interviews.

Link here
