
CHAPTER 3: Debiasing the Algorithm

This chapter explores the ethics, definitions, and metrics at the heart of conversations on fairness, and shows that even when models take these into account, how they are deployed and used in the real world matters just as much.

Published on Sep 14, 2021

There is often a perception that artificial intelligence (AI) systems are mathematical -- composed of logic and machinery -- and hence impartial and separate from questions of morality. However, there is not one set of things that is ‘AI’ and another set of things that is ‘human’. The two are fundamentally intertwined because of the nature of how we build and use AI systems.

Firstly, we must acknowledge that the data used in many AI systems captures only a snapshot of a fraction of human behavior. And as pointed out in [the introduction to “Raw Data” Is an Oxymoron], the person holding the camera gets to decide which snapshot to take and of whom. Which data gets collected and used, how it is cleaned and processed, what we decide to optimize, and which metrics we use to measure success -- each of these is a human decision informed equally by our computational knowledge and our human intuition.

Further, we need to acknowledge AI’s impact on society. Currently, individuals, governments, and organizations all consume AI products in ways that fundamentally affect the information to which we have access, and therefore directly influence the opportunities we have and the decisions we make. A system that primarily provides high-paying employment options to male job-seekers (Datta et al., 2018), or suggests potential home buying options primarily to white home-seekers (Sisson, 2019), is one that perpetuates and exacerbates the discrimination already present in society.

Hence, we end up inadvertently trapped in a feedback loop we didn’t even know we created. It is not just that biased data produces biased AI -- the direct effect on people and social structures loops back to affect the next round of data collected, producing a harmful feedback cycle (Noble, 2018). While alarming on the one hand, this feedback loop is also why there is a lot of promise in the work of debiasing algorithms -- if we can strategically intervene algorithmically, we have a powerful tool to help break the cycle of discrimination.

The Research Landscape of Debiasing AI

We are in a very exciting time research-wise with regard to understanding discriminatory AI and finding algorithmic methods to mitigate and eliminate these biases. If we have a specific problem framed in the context of computer science, we have a wide variety of tools that can help us step in and exert control over the output in order to meet a variety of fairness criteria and hence reduce the discrimination propagated.

The computer science community only started to get involved in this area less than 10 years ago. At first, a big bottleneck from an algorithmic perspective was that there are many ways to conceptualize fairness (Narayanan, 2018), and it was unclear which -- if any -- was the right one to study. Looking outside of our field, it became clear that there is no single right definition: the appropriateness of any metric of fairness depends on the context, values, and preferences of the stakeholders (Selbst et al., 2019; Verma & Rubin, 2018). From an algorithmic perspective, this is very troubling, as debiased algorithm design would seemingly have to solve a new problem from scratch each time a new context appears. While this could be done, it would be incredibly inefficient.

In order to address this, alongside fantastic colleagues and students, we have recently been able to make breakthrough progress by thinking of fairness metrics as belonging to a family of definitions (Celis et al., 2019). This allows us to design algorithmic frameworks that function with any definition coming from that family. As algorithm designers, we no longer have to come up with what we think might be a good fairness definition, or reinvent the wheel at every instance. We can instead design meta-algorithms for a particular use case -- such as advertising, ranking, or multi-winner voting -- that work with this whole family of fairness metrics (Celis et al., 2018; Zafar et al., 2019; Celis et al., 2019). Then, when someone wants to apply a debiased algorithm to a particular setting, they can do the (admittedly still difficult) task of developing the right fairness definition, and simply give it to the algorithmic framework.
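
To make the “framework” idea concrete, here is a minimal sketch (in Python, and not the algorithms from the cited papers) of a meta-procedure that accepts any group-fairness metric from a family as a parameter. The names `statistical_parity_gap`, `false_positive_rate_gap`, and `debias_by_thresholds` are illustrative, not an existing library.

```python
from itertools import product

import numpy as np


def statistical_parity_gap(y_pred, y_true, group):
    """Gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())


def false_positive_rate_gap(y_pred, y_true, group):
    """Gap in false-positive rates between the two groups."""
    def fpr(g):
        mask = (group == g) & (y_true == 0)
        return y_pred[mask].mean() if mask.any() else 0.0
    return abs(fpr(0) - fpr(1))


def debias_by_thresholds(scores, y_true, group, fairness_metric, tol=0.05):
    """Meta-procedure: search per-group decision thresholds that keep the
    chosen fairness metric below `tol` while maximizing accuracy."""
    best = None
    for t0, t1 in product(np.linspace(0, 1, 21), repeat=2):
        y_pred = np.where(group == 0, scores >= t0, scores >= t1).astype(int)
        if fairness_metric(y_pred, y_true, group) <= tol:
            acc = (y_pred == y_true).mean()
            if best is None or acc > best[0]:
                best = (acc, t0, t1)
    return best  # (accuracy, threshold for group 0, threshold for group 1)

# Either metric (or another from the same family) can be passed in unchanged:
# debias_by_thresholds(scores, y_true, group, statistical_parity_gap)
# debias_by_thresholds(scores, y_true, group, false_positive_rate_gap)
```

Swapping in a different metric from the family requires no change to the meta-procedure itself, which is the point of the framework perspective.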

We can illustrate this with a ranking problem (Celis et al., 2018; Celis et al., 2020). Say you are hiring and have an algorithm that is going to shortlist your candidates and give you a list of the top 10 candidates to interview. That is a ranking problem. If you're hiring at a large company, you may have resumes from previous employees along with their performance and other metrics. That is the data we will work with. We would then also ask you to define what you mean by a fair or debiased outcome.

For example, you may say that the shortlist must contain equal numbers of men and non-men. Perhaps that's a reasonable thing to do for your setting, or perhaps you prefer the shortlist to reflect the demographics of the applicants across gender and race. Either way, once you quantify both which protected attributes you want to account for (e.g., race or gender) and which fairness metric you want to use (e.g., equal representation, or similar error rates), the debiasing framework can take that as input to provide a custom solution.
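
As a rough illustration of that last step, the sketch below greedily builds a top-k shortlist subject to user-supplied minimum counts per group. It is a toy stand-in rather than the ranking algorithms of the cited papers; `fair_shortlist` and its inputs are hypothetical.

```python
def fair_shortlist(candidates, k, floors):
    """Greedy top-k selection subject to per-group minimum counts.

    candidates: list of (name, score, group) tuples
    k: shortlist size
    floors: dict mapping each group to the minimum number of slots it must get
    """
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    shortlist = []
    # First pass: reserve slots to satisfy each group's floor.
    for group, floor in floors.items():
        shortlist.extend([c for c in ranked if c[2] == group][:floor])
    # Second pass: fill any remaining slots purely by score.
    remaining = [c for c in ranked if c not in shortlist]
    shortlist.extend(remaining[: k - len(shortlist)])
    return sorted(shortlist, key=lambda c: c[1], reverse=True)

# Example constraint: a top-10 list with at least 5 women and at least 5 men.
# shortlist = fair_shortlist(applicant_pool, k=10, floors={"woman": 5, "man": 5})
```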


[Figure: A suite of algorithmic debiasing frameworks by task -- Polarization (Celis et al., 2019), Online Ads (Celis et al., 2019), Multi-winner Voting (Celis et al., 2018), Classification (Celis et al., 2019), Rebalancing (Celis et al., 2020), Ranking (Celis et al., 2018; Celis et al., 2020), and Sampling (Celis et al., 2017; Celis et al., 2018).]

We're certainly not done, but this framework perspective is really crucial because we no longer have to reinvent the wheel. The excuse “we wish we could implement something that doesn't discriminate, but we don't have the resources to develop it” is no longer valid -- you no longer have to start from scratch. Along with my colleagues, we now have a suite of such frameworks for solving many of the big problems in AI (Controlling Bias in Artificial Intelligence, 2021), and there are several other toolkits available for practitioners to use (Bellamy et al., 2018; Bird et al., 2020).

Big Challenges Ahead

While there are many reasons to be optimistic, there also remain many big challenges ahead for this field. These deep questions can be roughly categorized as follows -- the technical (which lie primarily in the domain of computer science), the human (which lie primarily in the domain of psychology and sociology) and the civil (which lie primarily in the legal and policy domain). Here, I outline some of the big questions to consider and collaboratively work to address.

The technological system

Richness of Fairness Concepts

While we can apply algorithmic debiasing techniques once the protected attributes and fairness metrics have been determined, many big questions remain, e.g., “What do I mean by an algorithm that doesn't discriminate?”, “What do I hope to accomplish?”, and “Who am I trying to serve?”. In particular, we currently run the risk of defining debiasing too narrowly. This question is part philosophical but also technical -- e.g., understanding that discrimination happens intersectionally, which algorithmic techniques can be efficiently extended to intersectional measures of fairness?

A significant bottleneck is that -- provably -- in most situations one cannot satisfy more than one fairness definition simultaneously (Chouldechova, 2017; Pleiss et al., 2017). Hence, we need to develop a better theory for this multi-objective fairness problem, where we would likely have to refocus our goal toward finding solutions that are not “too” unfair on any metric (Reich & Vijaykumar, 2021); this is still a nascent area but an important one to develop.
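
A rough numerical sketch of this tension, using synthetic data rather than any of the cited studies: a score that is calibrated in both groups and thresholded identically still yields very different error rates when the groups’ base rates differ -- the incompatibility formalized by Chouldechova (2017).

```python
import numpy as np

rng = np.random.default_rng(0)

def error_rates(scores, threshold=0.5):
    """Labels are drawn as Bernoulli(score), so the score is calibrated by
    construction; classification uses a single shared threshold."""
    y = rng.random(scores.size) < scores
    y_hat = scores >= threshold
    return round(y_hat[~y].mean(), 3), round((~y_hat[y]).mean(), 3)  # (FPR, FNR)

# Same calibrated score and same threshold, but different base rates.
group_a = rng.beta(2, 5, 200_000)   # lower base rate
group_b = rng.beta(5, 2, 200_000)   # higher base rate
print("group A (FPR, FNR):", error_rates(group_a))
print("group B (FPR, FNR):", error_rates(group_b))
# The error rates differ across groups even though calibration holds in both.
```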

Protected Attributes

A big question that we have only recently started to address is “What if we don't know the protected attributes in the data?” For example, in the US context, it is illegal to collect demographic information for use as part of a hiring process. This makes sense in that one wants to be protective of groups of people that may face discrimination, but it also limits the direct interventions that you can apply if you want to reduce implicit biases (Taslitz, 2007; Bonilla-Silva, 2013). The core fallacy of a blindness approach is that even if the category is not captured in the data explicitly, it is implicitly encoded in the other features, as one cannot separate a person’s demographics from their lived experience (Apfelbaum et al., 2010). In very recent lines of work we have started to address this question with new algorithmic debiasing techniques that can work implicitly on the data without protected attributes (Celis & Keswani, 2020; Keswani & Celis, 2021), and/or can work even if the protected attributes have errors (Mehrotra & Celis, 2021; Celis et al., 2021).
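
One simple way to see the “implicitly encoded” point empirically is to check how well the remaining features predict the protected attribute. The sketch below assumes a feature table `X` and the dropped attribute are available for auditing; `proxy_audit` is a hypothetical helper, not a published method.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_audit(X, protected):
    """How well do the 'blind' features X predict the protected attribute
    that was deliberately dropped from the model's inputs?"""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, protected, cv=5, scoring="roc_auc").mean()

# Hypothetical use: X holds zip code, school, and employment-history features;
# `gender` was never shown to the hiring model directly.
# auc = proxy_audit(X, gender)
# An AUC well above 0.5 means the attribute is still implicitly encoded.
```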

Composition Errors

One common misconception is that there is one single “AI” that is controlling the decisions -- in reality, it is often a patchwork of algorithms and components that together build an entire system with many decision points along the way. In essence, a whole bunch of different algorithms are working together. While there may be one part where bias or discrimination most obviously creeps in, the other components cannot be ignored, as they play a role in setting up the rest for success or failure.

One example where this certainly happens is web search (Google, 2020). When you start to search for something, Google doesn't go and search the entire internet in that moment -- that would just not be fast enough. A lot of preprocessing of the web has already occurred; further, a “quick and dirty” relevance sorting is likely performed so that the system can work with a much smaller subset of potential results. It can then explore this much smaller subset in more detail and produce a ranking of the results. When we think about fairness, we often consider only this last step of the process -- “How do I make sure that my ranking is both relevant and fair?” While this is of course a crucial component of the problem, if I only gave the ranking algorithm a very biased sample to work with, the ranking would be functioning with one hand tied behind its back.

Consider hiring, a place in which algorithmic decision making has been repeatedly shown to be discriminatory (Ajunwa, 2019; Hamilton, 2018). If in the pre-screening process I removed many female candidates, by the time I get to ranking my shortlist I simply do not have enough to work with. The problem is twofold -- I may simply not have enough women to consider, and the ones that I do have may not be as directly relevant to my task. The first part of the process is as important to address as the final ranking if we want to produce a quality diverse pool of candidates. In fact, if you just correct one component without considering the entire system, you may end up reducing the fairness in your system (Dwork & Ilvento, 2019).
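
A toy simulation of this two-stage pipeline (synthetic scores and groups, not real hiring data; the `pipeline` function below is purely illustrative): when the pre-screening stage penalizes one group, even a final shortlist that tries to be balanced cannot recover, because the candidates are no longer in the pool.

```python
import numpy as np

rng = np.random.default_rng(1)

def pipeline(prescreen_bias, pool_size=1_000, prescreen_k=100, shortlist_k=10):
    """Two-stage pipeline: a (possibly biased) pre-screen, then an attempt
    at a gender-balanced final shortlist from whatever survived."""
    group = rng.integers(0, 2, pool_size)     # 0 = men, 1 = women
    score = rng.normal(size=pool_size)        # true relevance, same distribution
    # Stage 1: the pre-screen penalizes group 1 by `prescreen_bias`.
    kept = np.argsort(-(score - prescreen_bias * group))[:prescreen_k]
    # Stage 2: how many women are even available for a balanced top-10?
    women_available = int((group[kept] == 1).sum())
    return min(women_available, shortlist_k // 2)

print("unbiased pre-screen:", pipeline(0.0), "women reach the shortlist")
print("biased pre-screen:  ", pipeline(2.0), "women reach the shortlist")
```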

Additional Ethical Considerations

Ensuring AI systems are not discriminatory is very important. However, it is not the only ethical consideration we must take when designing technology.

One important consideration is interpretability -- the ability to communicate to the public what an AI system does and how it operates, so that an average citizen can understand and trust its predictions (Rudin, 2019). This is a step even beyond traditional interpretations of explainability, where the goal is that an AI expert can understand the effect of the system (Ribeiro et al., 2016).

Another consideration is privacy. A big technological advance came with the introduction of differential privacy (Dwork, 2008) -- rich algorithms with strong privacy protections are now available and in use by big companies including Apple and Google (Cormode et al., 2018). However, there is very little work or exploration around the combined (and somewhat orthogonal) goals of privacy and fairness (Suriyakumar et al., 2021).
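
For flavor, here is the classic Laplace mechanism for a counting query, the basic building block behind much of the differential-privacy work cited above; the `private_count` helper and its arguments are illustrative, not taken from a specific library.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_count(records, predicate, epsilon):
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(predicate(r) for r in records)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical use: how many applicants in the dataset are under 30?
# noisy = private_count(applicants, lambda r: r["age"] < 30, epsilon=0.5)
```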

Once we start looking at a range of ethical considerations, we must ask how they interplay with our current techniques, and where technical gaps remain that keep us from leveraging the big successes in each separate area.

The human system

Everything I have talked about so far focuses on the technological components of debiasing. This is a crucial piece, but it is just one part of the broader system in which AI tools are used. As such, we must think of how this piece fits into the broader context of the technological system as a whole. How does the technology interact with the people who are using it? We have to consider both the individuals who use the technology directly and those who will be affected by it indirectly.

Stakeholders’ viewpoints

One important question to consider is what is a layperson’s expectation for fairness in an AI system? If you actually talk to people, what do they want? What do they think is fair? And how does this vary depending on their socioeconomic status, interest and knowledge about technology, and potential to be impacted directly? It's one thing for researchers and ethicists to conceptualize which different notions of fairness “make sense”, and another to directly ask those that would be affected. It is important to think about what people actually want. And perhaps most importantly we need to stop and understand the direct stakeholders that are affected, especially those coming from vulnerable populations whose voices are rarely heard. This often means re-evaluating any “solution” locally in the specific context in which it is to be applied, because every situation is going to be different. By centering those most affected, and making sure the communities are involved in the design process, we can step out of our collective hubris as technologists and begin to address the core of the problems.

Humans “in the loop”

It is also crucial to realize that AI systems often do not make the final decision. One of the algorithmic areas in which there is deep social impact is related to parole review and recidivism prediction. Automated systems are now being used across the US to give inmates or arrested individuals a “risk score”. This AI feedback is given to a judge, and the judge makes the decision. Recent work begins to address the question of how the prediction of the AI system and the decision of the judge interact (Green & Chen, 2019). This is similar to the composition question referenced above, except that now the composition is between AI and human components. Knowing that both AI systems and judges can be biased, does the combined system improve or worsen outcomes? There are some cases where -- depending on how we present the information -- when we compose we end up with something even more biased (Dwork & Ilvento, 2019). The entire pipeline has to be very carefully designed and audited to ensure that we are actually moving the needle in a positive direction.

The societal system

The third arena where big challenges remain is when we consider the social system as a whole. In this, AI is in many ways a very small component.

Regulation

What are our values as a society? And which policies and regulations do we need in order to live up to them? What can we regulate, and in which ways can and should we regulate it? How can we get everyone from the big tech companies to the local municipalities to use AI in a responsible manner? We need to work together across disciplines in order to find composite solutions that make sense given the technology we have, or reimagine what we can design in light of what can be regulated. It's not just about getting the technology right, it's about getting it right in the context in which it is deployed. A key difficulty is just learning to talk to one another: we need policy makers to understand the technology and technologists to understand policy implications. In terms of regulation, there are concrete proposals (Crawford et al., 2021; Bonatti et al., 2021) -- for example, a mandate for the collection of certain kinds of information. Even if we want to disallow the use of protected characteristics in an AI prediction, we may want to collect that data in order to perform an audit that assesses impact across demographics. Mandating this is a very concrete proposal, but even reaching the conclusion that we need such mandates requires cross-disciplinary input.

Ongoing Audits

Algorithmic systems as well as social systems are dynamic, and therefore we can’t think of this as one system that gets finished and then deployed. There has to be an active auditing process because the world is not static. Further, once we understand that policies and technologies affect individual behavior and opportunities, it becomes clear that by deploying a technology, we change the data landscape. Hence, we are not operating in a static system where you can “learn once” -- the process is dynamic, and learning has to be constantly updated and assessed. Toward this, we need trusted third-party regulators -- not unlike those that exist for banks -- that can conduct regular and ongoing audits of big technology firms or of technology that gets used in the public sector.
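
One concrete shape such an ongoing audit could take -- sketched below with hypothetical field names, not a prescribed standard -- is a recurring job that recomputes group-wise selection rates over each deployment window and flags drift beyond an agreed tolerance.

```python
from collections import defaultdict

def audit_window(decisions, tolerance=0.1):
    """Selection rate per group for one audit window, flagging any group whose
    rate falls more than `tolerance` below the best-served group.

    decisions: iterable of (group, selected) pairs, e.g. ("group_a", True)
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    flags = [g for g, rate in rates.items() if best - rate > tolerance]
    return rates, flags

# Run per deployment window (e.g., monthly or per model release) and escalate
# whenever `flags` is non-empty.
```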

Refusing to engage

Another question that we as technologists have to ask ourselves is: at what point do we refuse to build a system, as opposed to constantly trying to improve it or build a fair one from the biased one we have? There is a tension between doing our best to improve a system that will never be completely unbiased versus refusing to engage on principle because even the best possible is not good enough. This is, in short, an abolition question: at which point, and for which settings, do we have to take a hard look and argue as technologists that this is not a place for AI at all? As a whole, we have a lot of lessons that we can learn from previous successes and failures in social justice movements.


Conclusion

AI is a powerful tool with impacts that reach across our world. The changes it has brought for the better are immense, but so are the challenges that remain with regard to ensuring unbiased access and fair outcomes. I have been heartened by the recent interdisciplinary activity in addressing these questions, and challenge others to join this effort as we work towards a better world for all.

Bibliography

Ajunwa, I. (2019). The paradox of automation as anti-bias intervention. Cardozo L. Rev., 41.

Apfelbaum, E. P., Pauker, K., Sommers, S. R., & Ambady, N. (2010). In blind pursuit of racial equality? Psychological science, 21.

Bellamy, R. K., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., et al. (2018). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias.

Bird, S., Dudik, M., Edgar, R., Horn, B., Lutz, R., Milan, V., Sameki, M., Wallach, H., & Walker, K. (2020). Fairlearn: A toolkit for assessing and improving fairness in AI. Microsoft, Tech. Rep.

Bonatti, A., Celis, L. E., Crawford, G. S., Dinielli, D., Heidhues, P., Luca, M., Salz, T., Schnitzer, M., Scott Morton, F. M., Seim, K., Sinkinson, M., & Zhou, J. (2021). More Competitive Search Through Regulation. The Digital Regulation Project, Tobin Center for Economic Policy.

Bonilla-Silva, E. (2013). Racism without Racists: Color-Blind Racism and the Persistence of Racial Inequality in the United States. Rowman & Littlefield.

Celis, L. E., Deshpande, A., Kathuria, T., Straszak, D., & Vishnoi, N. K. (2017). On the complexity of constrained determinantal point processes. Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques.

Celis, L. E., Huang, L., Keswani, V., & Vishnoi, N. K. (2019). Classification with fairness constraints: A meta-algorithm with provable guarantees. Proceedings of the conference on fairness, accountability, and transparency.

Celis, L. E., Huang, L., Keswani, V., & Vishnoi, N. K. (2021). Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees.

Celis, L. E., Huang, L., & Vishnoi, N. K. (2018). Multiwinner voting with fairness constraints. Proceedings of the 27th International Joint Conference on Artificial Intelligence.

Celis, L. E., Kapoor, S., Salehi, F., & Vishnoi, N. K. (2019). Controlling polarization in personalization: An algorithmic framework. Proceedings of the conference on fairness, accountability, and transparency.

Celis, L. E., & Keswani, V. (2020). Implicit Diversity in Image Summarization. Proceedings of the ACM on Human-Computer Interaction 4.CSCW2.

Celis, L. E., Keswani, V., Straszak, D., Deshpande, A., Kathuria, T., & Vishnoi, N. K. (2018). Fair and diverse DPP-based data summarization. International Conference on Machine Learning.

Celis, L. E., Keswani, V., & Vishnoi, N. K. (2020). Data preprocessing to mitigate bias: A maximum entropy based approach. International Conference on Machine Learning.

Celis, L. E., Mehrotra, A., & Vishnoi, N. K. (2019). Toward controlling discrimination in online ad auctions. International Conference on Machine Learning.

Celis, L. E., Mehrotra, A., & Vishnoi, N. K. (2020). Interventions for ranking in the presence of implicit bias. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.

Celis, L. E., Straszak, D., & Vishnoi, N. K. (2018). Ranking with Fairness Constraints. International Colloquium on Automata, Languages and Programming.

Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2).

Controlling Bias in Artificial Intelligence. (2021). https://controlling-bias.github.io/

Cormode, G., Jha, S., Kulkarni, T., Li, N., Srivastava, D., & Wang, T. (2018). Privacy at Scale: Local Differential Privacy in Practice. Proceedings of the 2018 International Conference on Management of Data.

Crawford, G. S., Crémer, J., Dinielli, D., Fletcher, A., Heidhues, P., Luca, M., Salz, T., Schnitzer, M., Scott Morton, F. M., Seim, K., & Sinkinson, M. (2021). Consumer Protection for Online Markets and Large Digital Platforms.

Datta, A., Datta, A., Makagon, J., Mulligan, D. K., & Tschantz, M. C. (2018). Discrimination in online advertising: A multidisciplinary inquiry. Conference on Fairness, Accountability and Transparency.

Dwork, C. (2008). Differential privacy: A survey of results. Springer, Berlin, Heidelberg.

Dwork, C., & Ilvento, C. (2019). Fairness Under Composition. Innovations in Theoretical Computer Science Conference.

Google. (2020). How Search organizes information. https://www.google.com/search/howsearchworks/crawling-indexing/

Green, B., & Chen, Y. (2019). Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments. Proceedings of the Conference on Fairness, Accountability, and Transparency.

Hamilton, I. A. (2018). Amazon built an AI tool to hire people but had to shut it down because it was discriminating against women. https://www.businessinsider.com/amazon-built-ai-to-hire-people-discriminated-against-women-2018-10

Keswani, V., & Celis, L. E. (2021). Dialect Diversity in Text Summarization on Twitter. Proceedings of the Web Conference.

Mehrotra, A., & Celis, L. E. (2021). Mitigating Bias in Set Selection with Noisy Protected Attributes. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.

Narayanan, A. (2018). Translation tutorial: 21 fairness definitions and their politics. Proc. Conf. Fairness Accountability Transp., New York, USA, 2(3). https://www.youtube.com/watch?v=jIXIuYdnyyk

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

Pleiss, G., Raghavan, M., Wu, F., Kleinberg, J., & Weinberger, K. Q. (2017). On Fairness and Calibration. Advances in Neural Information Processing Systems, 30.

Reich, C. L., & Vijaykumar, S. (2021). A Possibility in Algorithmic Fairness: Can Calibration and Equal Error Rates be Reconciled? Symposium on Foundations of Responsible Computing.

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining.

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5).

Selbst, A. D., boyd, d., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. Proceedings of the conference on fairness, accountability, and transparency.

Sisson, P. (2019). Curbed. https://archive.curbed.com/2019/12/17/21026311/mortgage-apartment-housing-algorithm-discrimination

Suriyakumar, V. M., Papernot, N., Goldenberg, A., & Ghassemi, M. (2021). Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.

Taslitz, A. (2007). Racial Blindsight: The Absurdity of Color-Blind Criminal Justice. Ohio State Journal of Criminal Law.

Verma, S., & Rubin, J. (2018). Fairness definitions explained. IEEE/ACM International Workshop on Software Fairness (Fairware).

Zafar, M. B., Valera, I., Gomez-Rodriguez, M., & Gummadi, K. P. (2019). Fairness Constraints: A Flexible Approach for Fair Classification. Journal of Machine Learning Research, 20.
