
Moving From Diagnosis to Change

This paper attempts to understand the system of technical requirements, epistemologies, institutions, organizational practices and cultural norms that shape human decision making within AI research.

Published on Dec 17, 2021

I. Introduction

Although the benefits of artificial intelligence tools for development are potentially impressive,1 there are strong critiques of AI from a feminist standpoint. D’Ignazio and Klein,2 for example, have shown how biased data sets amplify gender and racial inequality, while Noble3 has written about how search engines reinforce racism and misogyny, and Eubanks4 has shown how IT systems automate inequality. However, there is much less literature on specifically what needs to change and how that can be done.

In order to help researchers who are contributing to a just AI, this paper attempts to understand the system of technical requirements, epistemologies, institutions, organizational practices and cultural norms that shape human decision making within AI research. We consider pathways to more just data and AI by examining what has met with success in other attempts to advance gender equality. Finally, we ask which directions are promising for AI researchers and what support would help them as they consider and implement their plans.

II. Pathways for change in development contexts

Thinking about interventions for gender equality, we need to consider the questions of what we want to change and how to do it.

The “what” of moving toward gender equality means change in one or more of the four quadrants in the diagram below. In other words, to advance gender equality in a community or an organization requires attention to “formal” dynamics such as increased resources, better policy and other formal structures. It also requires attention to the informal dynamics such as consciousness and awareness of gender power relations and change of exclusionary norms. The framework invites us to look at both individual change and systemic change, both formal mechanisms and informal, normative dynamics,5 and to try to understand the nature of change and what particular change we are intending.

To look at the how of change we need another framework. Bennis et al.6 identified three “families” of change strategies for human systems that work across persons, communities and organizations.

Their categorization of change includes:

  • Coercive—use of the law, policy or other means to pressure compliance. Examples include policies, quotas, legal reform, affirmative action, and women’s mobilization.

  • Rational—use of reasoning and evidence to convince people, including the provision of policy research, economic arguments (about how gender equality leads to other important outcomes), or arguments about the importance of girls’ education in boosting community-level outcomes.

  • Normative—recognition of non-rational or attitudinal dynamics and resultant use of particular types of educational approaches. This approach includes educational efforts such as gender training or diversity training as well as broader efforts to change cultural norms in organizations.

The diagram below shows how these three strategies combine, along with some examples of action taken toward a more just AI.

III. Gender data gaps as barriers to feminist AI

AI that aims to support good outcomes requires good data, both to ensure that AI outputs are efficient and effective in meeting developers’ intended aims and to prevent harmful, unintended consequences. Similarly, feminist AI requires good ‘gender data’: data that reflects and makes visible differences in the experiences, needs, opportunities, or contributions of women and men, girls, boys, and gender-non-conforming persons. Gender data can be quantitative or qualitative, generated through conventional methods such as household surveys and ethnographic analyses, through novel means such as citizen-generated data initiatives that use mobile phones, or through the large and unconventional datasets produced, for example, by social media platforms, real-time sensors, and electronic records (“big data”).

‘Gender data gaps’ have become a key concern among development institutions and organizations, which use the term to refer to data that is ‘missing’ or unavailable, whether in relation to the UN Sustainable Development Goals indicator framework or to the drivers of gender inequality and how to create positive change. Various institutions have mapped gender data gaps across the Sustainable Development Goals indicator framework,7 shining a light, for example, on a lack of comparable, sex-disaggregated data on the number of people living below 50 per cent of the median income (SDG 10), or gender data on incidences of sexual violence (SDG 5). Indeed, national official statistics are not sex-disaggregated in many places (let alone disaggregated by other socio-demographic characteristics such as ethnicity, race, or disability).8 Official statistics, moreover, may not be collected on themes that matter to gender-responsive policy making, such as unpaid care or access to maternal healthcare.

Gender data gaps, the lack of quality or useful gender data, and unused gender data exist for a variety of reasons. It is possible to make sense of the reasons why the gender data gaps exist using the above analytical framework and its four categories: consciousness and capabilities; resources; formal policies and rules; and informal norms and exclusionary practices. Making sense of gender gaps in this way may shed light on practical pathways to overcoming them.

First, even basic gender data may simply be ‘missing’ or unavailable. National statistical offices (NSOs), the bedrock of evidence-based policymaking and resource allocation, may lack the mandate to collect and share a broad range of sex-disaggregated statistics. In addition to the formal policies and rules that govern institutional priorities, this can be a problem of insufficient financial resources to produce gender data, which can also undermine the timeliness and frequency of the gender statistics that are collected. Informal norms matter here too: the decision not to collect data on time spent on unpaid care, and on the contribution that unpaid care makes to gross domestic product, can reflect attitudes and beliefs that unpaid care is ‘women’s work’, natural, and of little value to economic systems.
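The kind of availability audit described above can be sketched in code. The sketch below is purely illustrative: the inventory records, indicator names, and field labels are hypothetical, standing in for the indicator inventories that institutions have compiled when mapping gender data gaps.

```python
# Illustrative sketch: flag indicators in a hypothetical national
# statistics inventory that lack sex-disaggregated values.

# Each record names an indicator and lists the disaggregations
# actually published for it (hypothetical data).
inventory = [
    {"indicator": "population below 50% of median income",
     "disaggregations": {"region"}},
    {"indicator": "incidence of sexual violence",
     "disaggregations": set()},
    {"indicator": "school enrolment",
     "disaggregations": {"sex", "region"}},
]

def gender_data_gaps(records):
    """Return the indicators with no sex-disaggregated data published."""
    return [r["indicator"] for r in records
            if "sex" not in r["disaggregations"]]

if __name__ == "__main__":
    for name in gender_data_gaps(inventory):
        print(f"gap: no sex-disaggregated data for '{name}'")
```

A real audit would of course work from an NSO’s or international agency’s actual metadata, and would check further disaggregations (age, disability, ethnicity) alongside sex; the point here is only that ‘missing’ gender data can be identified systematically once an inventory exists.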

Second, gender data may exist but not be useful for the purposes of feminist policymaking, programme design, or advocacy, for a variety of reasons, many of which have to do with quality. The discourse around gender data often conflates gender with ‘women,’ for example, and gender data with sex-disaggregated statistics. The production of quality gender data, however, is not a simple exercise in ‘counting women’: it requires methodological rigor and a deep familiarity with the ways that power operates in the lives of individuals, including men and boys and gender non-conforming persons, in specific social, economic, political and geographical contexts.9

Gender data of poor quality can be a capability and consciousness problem. The generation of quality gender data requires researchers and data scientists to grasp the gender biases embedded in “raw data”; in definitions, classifications, and hypotheses; in research questions; in data collection methodologies; and in population samples.10 Not all data scientists, researchers and policy analysts have training in the production and interpretation of gender data or in feminist theories of change, and thus not all gender data is quality gender data, and therefore not all of it is useful.

Poor quality gender data can also be a result of informal norms and exclusionary practices, exemplified in datasets with embedded biases and limited representativeness. For example, official data on femicides often significantly undercounts actual gender-related killings of women, a dynamic that can be driven by impunity in the justice system, as well as discrimination against poor and racialized women.11

Finally, gender data may exist but not be used. When it comes to making use of novel, digitally produced data sets (‘big data’), a lack of resources within feminist organizations and other social sector organizations sympathetic to gender equality can be a significant barrier.12 Feminist development organizations are chronically underfunded, and data scientists are expensive. Feminist organizations may therefore be unable to afford the necessary in-house data science expertise, not least because they must compete with tech companies able to offer far larger financial rewards.

When it comes to digitally produced data sets, there may also exist poor communication and a lack of trust (often for good reason) between feminist organizations and data holders. Organizations oriented to the achievement of gender equality often have very low visibility into what specific datasets tech companies possess, the terms under which data might be responsibly shared, and what other data-relevant resources the tech community might be in a position to make available.13 Productive dialogue is needed to create a baseline of mutual understanding from which to design technologies and technology-supported interventions that produce and make good use of high quality gender data. These can be problems of formal rules and policies that prohibit close working relationships (e.g. non disclosure agreements, data sharing regulations), as well as problems of consciousness and capabilities insofar as it can be challenging to speak across disciplinary and institutional differences.

Lack of trust can also be grounded in differences in motive: feminist organizations are oriented towards the achievement of gender equality (and may have related or secondary aims of providing employment, saving the environment, improving literacy, making a profit, etc). In contrast, the corporations that produce novel datasets are typically motivated by profit and beholden to shareholders. Feminist organizations thus rightly have concerns about the extent to which tech companies are willing to uphold women and girls’ rights beyond ‘data for development’ initiatives, making the privacy, safety and security of women and girls a priority in their corporate products and services–including the data they produce and are increasingly apt to ‘donate’ to gender equality or development causes.

And finally, even when good gender data does exist, it may simply be ignored, signalling a gap in political will, and a problem of informal norms and exclusionary practices that deprioritize issues of gender inequality. There is perhaps no better example than gender-based violence, for which plenty of data has long existed, yet services that prevent and respond to GBV are chronically under-resourced, and often the first to be cut in times of austerity.

Lack of quality gender data, and lack of gender data use, are but two issues behind biased AI. For researchers invested in shaping feminist AI futures, data availability, quality, and accessibility are central concerns. Identifying the drivers behind particular gender data gaps can help motivate efforts to resolve them.

IV. Pathways to change: Examples from practice

Given these barriers, do traditional pathways for change work in this context?

Thus far, we have (a) considered theoretical frameworks for social change, and (b) examined gender data gaps as a barrier to feminist AI. Now, this section will turn to examples of feminist action that have resulted in the production of quality gender data, and/or feminist AI. We harness Bennis, Benne and Chin’s three “families” of social change strategies to categorize and analyze how individuals, communities and organizations are moving beyond the identification of a problem, to resolving it.

Consistent with Bennis, Benne and Chin’s model for social change, these examples illustrate the importance of finding a balance between coercive, rational and normative change strategies.

For example, in recent years feminist agricultural groups in Brazil have pushed for the documentation of women farmers’ production in order to make visible women’s contributions to Brazil’s economy.14 A lack of sex-disaggregated data on smallholder production had previously rendered women’s productive capacity invisible, and thus systematically undervalued. With the distribution of log books in which women can now document their daily production and sales, women farmers have gained access to financing for family farmers and can now participate in school meal procurement schemes open to smallholder farmers. That is, we see how feminist agricultural groups used coercive strategies to include women’s agricultural production in national statistics. By addressing exclusionary practices, they were able to use this new data as evidence of women’s contributions to the economy (a rational strategy), and thus influence formal rules and policies (such as women’s access to state services).

IBM CEO Arvind Krishna’s decision to no longer develop, research, or sell facial recognition or analysis software provides another example of how diverse strategies are needed to create positive change towards feminist AI.15 Krishna’s decision represents a culmination of different vital pressures: (a) Researchers and journalists used rational and normative strategies to illustrate what facial recognition technologies do, and why the current use of facial recognition data represents a threat to society, human rights, and racial justice. In doing so, these actors also produced a clear platform from which (b) social movements were able to lead coercive strategies to demand change to formal rules and policy. In particular, the immense momentum of the #BlackLivesMatter movement and its mobilization around law enforcement played a significant role in pressuring companies like IBM to recognize their exclusionary practices and change the company’s policy on the issue.

If a holistic social change approach is needed, then it is also important for feminist AI advocates to accurately diagnose gaps in strategies in order to effectively resolve them.

Mimi Onuoha and Diana Nucera’s “People’s Guide to AI,” published with Allied Media Projects, provides an accessible introduction to AI and its impacts on society.16 As a social, educational initiative, Onuoha and Nucera’s work provides an example of how to overcome the consciousness and capabilities barrier through normative strategies. An analysis of the available literature suggests that significant gaps may remain in this area. This is important, because without normative strategies (changing attitudes as well as minds about why this issue matters) it is difficult to build a strong social movement around feminist AI.

In November 2018, thousands of Google employees around the world left their offices to publicly protest their company’s handling of sexual harassment, following reports of million-dollar exit packages for male executives accused of misconduct.17 As described in Data Feminism, the walkout represented a growing rejection of tech companies’ lack of diversity and the consequent elite, white, male biases in technology development, with lackluster sexual harassment policies being one of many factors that continue to render tech companies hostile to women.18 The walkout was particularly important because it illustrated the possibility of coercive strategies from internal actors, who may have greater consciousness of what needs to change and are uniquely positioned to leverage coercive strategies to demand it.

If these examples uphold long-standing theories of social change (i.e., that a diverse range of rational, coercive, and normative strategies is needed), then why does change in the sector appear so delayed? Patterns in the examples above help to illustrate the unique challenges facing social change strategies in the context of feminist AI.

  • Lack of clear policy asks to mobilize around. An initial review of the literature on feminist AI and movements around feminist AI illustrates the need for clearer policy demands—a gap closely related to the high-authority barrier discussed below. Much of the literature on feminist AI discusses feminist AI “approaches” and “principles”, rather than advocating for a particular feminist AI policy platform.

Of the feminist AI change strategies analyzed thus far, we see that those with greater success in achieving formal rules and policy change started with strong clarity on what policy changes were needed. For example, the Brazilian government’s inclusion of sex-disaggregated agricultural production data was a consequence of targeted coercive strategies via strong social movements. It is hard to apply pressure and coerce change without knowing what exactly needs to change.

  • High-authority barrier. Demanding more ethical and feminist AI development requires both external and internal pressures. External pressures require that those outside the organizations creating and using data (particularly tech companies) understand which reforms are needed and why. However, there is a presumed authority barrier—those who are “qualified” to work with data and data technology are presumed to hold prestigious educational degrees and “know more” than industry outsiders. This uniquely high authority barrier often limits rational and normative social change strategies, which consequently limits the possibility of external coercive strategies as well.

In recognizing this barrier to progress towards feminist AI, we can see the importance of (a) normative and rational strategies, such as Onuoha and Nucera’s “People’s Guide to AI” and Josie Swords’ “Feminist Chatbot Design Process (FCDP)”, and likewise (b) the need for internal coercive strategies (i.e., coercion by those who can more easily overcome the high-authority/knowledge barrier), as illustrated by the 2018 Google Walkout.

  • Tech’s embeddedness in our lives. Tech companies remain one of the principal spaces for creating change towards a more feminist AI. However, some argue that tech reform remains limited due to the unique involvement of tech in so many aspects of our daily lives. Because new technologies are becoming increasingly integral to our professional and personal lives, it has become increasingly difficult to boycott particular companies or build strong social movements (coercive strategies) around tech reform towards more feminist AI.

Given tech’s embeddedness in much of our daily activity, and thus the challenges for movement-building around feminist AI reform, educational initiatives (normative strategies), clear, evidence-driven policy asks (rational strategies), and internal pressure (coercive strategies) are especially important. Furthermore, these pervasive barriers to change also indicate the need for open dialogue between feminist organizations and the tech community.19 Feminist AI will require tech companies to prioritize human rights and gender equality in all aspects of their work, not just via isolated data-for-good initiatives.

V. Conclusions, recommendations and research gaps

Responsible AI requires high-quality and representative data sets, and good feminist AI requires good gender data. In order to better understand ‘gender data gaps’ as a principal barrier to feminist AI, we built upon the Gender at Work analytical framework to diagnose the formal and informal dynamics, at both individual and systemic levels, affecting the quality, availability, and use of gender data. We suggest that addressing these gender data gaps requires a clear identification of their root causes, which may be multiple and intersecting.

Building upon Bennis, Benne and Chin’s “families” of change strategies framework, we discussed exemplary pathways for more just, feminist AI. In doing so, it became clear that while a balance of coercive, rational and normative change strategies is needed to resolve the barriers discussed above, there are unique strategizing barriers delaying progress in this area. Namely, feminist AI’s lack of a clear policy platform, along with tech’s high-authority barrier and deep embeddedness in our lives, presents unique challenges for the feminist AI movement, and will need to be addressed in order to achieve more substantial progress towards feminist data collection and use.

Based upon this initial exploration, researchers committed to understanding the roots of biased or ‘bad’ AI, and to identifying pathways for the development of feminist AI, should:

  • Recognize the diverse ecosystem involved in data production and use, and consider different actors’ roles in either maintaining or challenging gender data gaps: in particular, the role of civil society in mobilizing for change, and ways of working with government ministries to build collaborative change efforts.

  • Likewise, make use of the diverse tools available and needed for enacting social change towards feminist AI, considering in particular the dynamics between internal and external coercive strategies, which are critical for addressing tech’s high authority barrier and embeddedness in our daily lives.

  • Conduct research on ‘bright spots’: examples where feminist actors have had success in bridging gender data gaps, achieving the use of gender data, and building more feminist AI.

  • Conduct cross-disciplinary collaborative research that brings together diverse perspectives and strengths not often in conversation, such as data scientists and tech developers, social scientists with gender theory expertise, feminist advocacy organizations and grassroots community organizers.

  • Continue pushing the boundaries of current discussions regarding what is feminist AI, and identify and mobilize around a feminist AI policy platform that includes informal practices and formal rules, as well as the different levels of change needed at both individual and systemic levels.

In order to continue advancing the feminist AI research agenda from theory and identification to practice and impact, below we highlight critical research gaps in the present feminist AI literature:

  • Who is currently mobilizing around feminist AI, and who is missing? What does it mean (and what is needed) to strengthen social movements around feminist AI?

  • How are grassroots feminist organizations in different geographic contexts discussing and/or mobilizing around feminist AI? How are gender data advocates engaging with just AI advocates, and vice versa?

  • What is the lifecycle of different feminist AI initiatives, and what are the particular barriers to feminist AI uptake?

  • Are the barriers for social change identified in this paper (lack of clear policy asks, high-authority barriers, and tech's embeddedness in our lives) relevant in different geographic and industry-specific contexts?

Our world is characterized by norms which are non-inclusive and held in place by power relations. They frequently prescribe what is possible for groups and individuals. They influence what is valued as expertise, knowledge and intelligence. They can reinforce gender and racialized stereotypes, which can lead to both overt and covert gender and racialized discrimination. Without specific and systematic efforts to change, science and tech organizations will reinforce and extend the broader norms that entrench gender and racial discrimination and exclusion. Internal organizational structures often replicate the unequal gender, racial and power relations overrepresented within and across systems. Similarly, the AI work developed by these organizations and firms can reinforce the wider socio-technical exclusion or repression of women’s knowledge and reify a gendered and racialized conceptualisation of ‘intelligence.’

But norms can be changed. By reaching a “tipping point,” new norms become more widespread.


Azcona, G., & Duarte Valero, S. (2018). Making women and girls visible: Gender data gaps and why they matter. UN Women.

Bennis, W. G., Benne, K. D., & Chin, R. E. (1961). The planning of change: Readings in the applied behavioral sciences.

Cookson, T. P., Fuentes, L., Zulver, J., & Langworthy, M. (2020). Building Alliances for Gender Equality: How the tech community can strengthen the gender data ecosystem. Ladysmith.

D’Ignazio, C., & Klein, L. F. (2020). Data Feminism. MIT Press.

Dash, A. (2016) "How do we reform tech?" Human Tech.

Data2X (2020). Mapping gender data gaps: An SDG era update.

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.

Fergus, J. (2020). These feminist chatbots were designed to combat online abuse. Input.

Fuentes, L. (2020). “The Garbage of Society”: Disposable Women and the Socio‐Spatial Scripts of Femicide in Guatemala. Antipode, 52(6), 1667-1687.

Fuentes, L., & Cookson, T. P. (2020). Counting gender (in) equality? a feminist geographical critique of the ‘gender data revolution’. Gender, Place & Culture, 27(6), 881-902.


Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

Onuoha, M & Nucera, D (2018). A People’s Guide to Artificial Intelligence. Allied Media Projects.

Peters, Jay (2020). IBM will no longer offer, develop, or research facial recognition technology. The Verge.

Rao, A., Sandler, J., Kelleher, D., & Miller, C. (2015). Gender at work: Theory and practice for 21st century organizations. Routledge.

Smith, M., & Neupane, S. (2018). Artificial intelligence and Human Development: Toward a research agenda. IDRC.

Swords, Josie (2017). Designing Feminist Chatbots.

UN Women (2020). Families in a Changing World: Progress of the World's Women 2019-2020.

UN Women (2018). Turning promises into action: Gender equality in the 2030 Agenda for Sustainable Development

Wakabayashi, D., Griffith, E., Tsang, A., & Conger, K. (2018, November 1). Google Walkout: Employees Stage Protest Over Handling of Sexual Harassment. New York Times.
