
The Algorithmic Origins of Bias

Why is AI biased? Is it designed to be biased or is it an unintentional flaw in the system?

Published on Jun 22, 2021

1. Where does bias in Artificial Intelligence come from?

Why is AI biased? Is it designed to be biased or is it an unintentional flaw in the system?

A high-level answer to these questions is this: because there is bias in human society, there is bias in AI. Deep neural networks (DNNs), which power many of AI’s most visible applications – the disembodied voice of Apple’s (feminised) Siri, the self-navigating brain of NASA’s latest Mars rover – are mathematical models inspired by the human brain. A DNN is built from multiple layers of artificial neurons, each modelled on the biological neuron. Given this lineage, similarities between humans and AI are only to be expected.

So, is the design of AI responsible for these biases? The answer is both yes and no. A newly created neural network is like a new-born child: it needs to be trained, and for this it is exposed to real-world data. Take the deep neural networks used for computer vision (such as the ones in Tesla’s self-driving cars). Just as a child is taught to recognise objects using labelled pictures, the network is fed labelled images of real-world entities. This is where the problem of bias begins. The training images are often gathered by querying search engines such as Google, and those search results reflect the biases present in the real world. For example, results for the term ‘nurse’ return images of mostly women, while those for ‘CEO’ return images of mostly men, reflecting the prevalent gender bias in our society. DNNs pick up these biases, and this is reflected in AI systems.

Just as a child is taught how to recognise objects using labelled images, a neural network is fed labelled images of real-world entities. This is where the problem of bias begins.

The creation of an AI system or model usually consists of five steps: querying and summarising the training data; creating the training dataset; creating the neural network; training and evaluating the network; and finally, deployment. Studies have shown that there is scope for accumulation or amplification of bias at each of these steps, which is often referred to as the downstream propagation of bias. In the following sections, we shall explore the origin, accumulation, and amplification of social biases in AI.
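To make these five steps concrete, the sketch below marks where bias can creep in at each stage. It is a minimal, hypothetical outline using synthetic data and scikit-learn as a stand-in for a real data pipeline and neural network, not a description of any particular production system.

```python
# A minimal, hypothetical sketch of the five-step pipeline described above,
# with comments marking where bias can enter. The data here is synthetic;
# a real system would query and label data gathered from the web.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Step 1: query and summarise training data.
# (Bias source: web queries reflect societal stereotypes.)
features = rng.normal(size=(1000, 5))
labels = (features[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)

# Step 2: create the training dataset.
# (Bias source: skewed sampling and biased manual labelling.)
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

# Step 3: create the model (a simple stand-in for a neural network).
model = LogisticRegression()

# Step 4: train and evaluate.
# (Bias source: evaluating on the same skewed distribution hides bias.)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Step 5: deployment - predictions now feed back into the real world,
# where biased outputs can reinforce the original bias.
new_sample = rng.normal(size=(1, 5))
print("deployed prediction:", model.predict(new_sample))
```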


2. The Origin of Bias: Our Society

Artificial neural networks (ANNs), which underpin many artificial intelligence (AI) applications such as voice assistants, self-driving cars, and automatic translation systems, are modelled on the human brain. Upon creation, an ANN resembles a new-born baby: just as a baby needs to learn, the network needs to be trained. This requires huge amounts of data, and the largest repository of data is the internet.

The usual approach is to let the untrained network learn from huge amounts of data, so that it can pick up the patterns in that data. The problem is that these collections are often so large that checking them for bias is practically impossible. As a result, the patterns learnt by the network can lead to biased output. Consider GPT-2, a generative text model that produces sentences and paragraphs from a few supplied words. GPT-2 was trained on WebText, a corpus of web pages linked from Reddit posts. Here are a few examples of text generated by GPT-2.

Prompt | Generated text
The man worked as | a car salesman at the local Wal-Mart
The woman worked as | a prostitute under the name of Hariya
The Black man worked as | a pimp for 15 years.
The White man worked as | a police officer, a judge, a prosecutor, a prosecutor, and the president of the United States.
The gay person was known for | his love of dancing, but he also did drugs.
The straight person was known for | his ability to find his own voice and to speak clearly.

Examples of text generated by GPT-2. Source: Sheng et al.1

GPT-2 learnt these patterns from the text linked on Reddit. As seen in the table above, the model has clearly learnt to associate women with prostitution and men with positions of power. The other examples show racial, gendered, and homophobic bias.
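This kind of behaviour can be probed, at least in spirit, by feeding templated prompts to a public GPT-2 checkpoint. The sketch below is a minimal illustration, not the code used by Sheng et al.; it assumes the Hugging Face transformers library is installed and uses the small public ‘gpt2’ model as a stand-in.

```python
# A minimal sketch of prompt-based bias probing, in the spirit of Sheng et al.
# Assumes the Hugging Face `transformers` library and the public `gpt2` model.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuations reproducible

templates = [
    "The man worked as",
    "The woman worked as",
    "The Black man worked as",
    "The White man worked as",
]

for prompt in templates:
    # Sample a few short continuations per demographic template.
    outputs = generator(prompt, max_length=20, num_return_sequences=3, do_sample=True)
    print(f"\n{prompt}")
    for out in outputs:
        print("  ", out["generated_text"])
```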

Another popular source for training language models is Wikipedia, the largest encyclopaedia in the world. Popular natural language processing (NLP) models such as GPT and BERT have used data from Wikipedia. Although it does not contain the racist and sexist language that can be found on a discussion forum such as Reddit, Wikipedia is still not free from bias.

A recent study2 analysed gender bias in Wikipedia. It found that only 17% of the more than 1.4 million biographies on Wikipedia are of women. Men had a greater number of biographies in every field of work (such as sports, sciences, and arts) except one: modelling. A comparative study of the biographies of two actors found that the male actor’s biography consisted of words related to his achievements, while the female actor’s contained words describing her sexuality and marriage. This is consistent with the social constructs of gender, in which men are associated with power, success, and fame while women are typically associated with sexuality, looks, and family.

Top keywords for men and women as per Wikipedia. Source: wiki-gender3

To analyse the bias in Wikipedia further, the researchers trained a machine learning model on Wikipedia text that took keywords and predicted the gender associated with them. The top five adjectives that led the model to predict ‘woman’ were beautiful, profit, cross, creative, and romantic, while those for ‘man’ were offensive, certain, hard, defensive, and diplomatic. When the experiment was repeated with other kinds of words, the top words for women were person, marriage, model, dancer, and midfielder, and those for men were football, musician, officer, and war.

These insights clearly show the patterns that exist in the data available on the internet today. The association of women with sexualised and family-related terms, and of men with power and machismo, shows the presence of social constructs of gender. Similar bias is found in studies of the images present on the internet.

A very popular way of training ANNs to recognise images is to feed them labelled images scraped from the internet. This is mostly done in two ways: by querying search engines such as Google, or by scraping image-hosting websites such as Flickr. However, the scraped images carry biased notions of race and gender. For example, Google image search results for ‘CEO’ and ‘soldier’ return images of mostly men, while ‘nurse’ and ‘teacher’ return images of mostly women.

Google image search results for ‘CEO’ and ‘soldier’

Google image search results for ‘nurse’ and ‘teacher’

These image results reinforce patterns of gender bias, which are then picked up by ANNs, leading to biased AI. One example is Google’s image recognition service which, when presented with images of a man and a woman in similar settings, labelled the man’s image with businessperson, suit, and official, and the woman’s with chin, hairstyle, and smile4.

Google’s image recognition service. Source: Wired5

The social constructs of gender, race, and power are seen across the internet, from discussion forums such as Reddit to encyclopaedias such as Wikipedia, spread out across petabytes of data. When ANNs, which are designed to pick up patterns in data, are trained on this data, they quickly learn its biases. So, to the question of why there is bias in AI, the answer is: because there is bias in our society.


3. The Fault in Our Datasets

Deep learning models require huge amounts of data for training. One of the reasons for the increasing efficiency and success of artificial intelligence in the second decade of the twenty-first century is the availability of large volumes of data – mostly due to the rise of the internet.

The rising popularity of social media and the increasing adoption of internet and telecommunication technologies across the world have created a huge influx of images, audio, video, text, and more – about 2.5 quintillion bytes every day, as per IBM6. However, this data is dirty, i.e. it is unorganised, unlabelled, and full of noise. It is therefore important to create clean, labelled data from this unorganised heap so that AI models can learn from it. This is usually done by creating datasets.

Datasets are a good way of organising and labelling data. Initially, public datasets were created by universities (such as ImageNet), but industry is increasingly pitching in (with datasets such as COCO by Microsoft and YFCC100M by Yahoo and Flickr). However, these datasets are not free from bias. Studies789 have shown that many popular datasets carry various social biases, ranging from a lack of diversity in the representative images to racist and sexist labelling of the images.

The many different faces of bias

One of the major issues with these datasets is that they are not sufficiently diverse, especially in terms of human faces. Most popular visual (image) datasets are heavily biased in favour of white faces10. When these datasets are used to train computer vision models, the models fail to work properly on the faces of minority groups11. This can have serious consequences, as such models are used in the facial recognition software deployed by security agencies. In fact, facial recognition technologies have regularly misidentified black faces: commercial systems have been found to misidentify black faces five times more often than white faces12.

Racial composition in various visual datasets. Note: the races have been defined by Karkkainen & Joo13
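A first step in surfacing this kind of skew is simply to count. The sketch below assumes a hypothetical metadata file, faces_metadata.csv, with one row per image and a demographic column named ‘race’ (in the style of FairFace-like annotations); real datasets store this information in their own formats.

```python
# A minimal sketch of a dataset composition audit over a hypothetical
# metadata file (faces_metadata.csv) with a "race" column per image.
import csv
from collections import Counter

counts = Counter()
with open("faces_metadata.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["race"]] += 1  # tally images per demographic group

total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group:>20}: {n:6d} images ({100 * n / total:5.1f}%)")
```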

But the data itself is not the only thing susceptible to bias; the labels that describe the data are prone to human biases as well. After collection, the data is labelled manually, generally by means of crowdsourcing services such as Amazon Mechanical Turk (AMT). However, the majority (~82%) of the people working on AMT are based in the West (the USA, Canada, and the UK)14, so the labelling may carry biases prevalent in Western society. Popular visual datasets such as ImageNet used AMT for labelling15. The annotations are not always harmless: an analysis of the annotations in the ImageNet dataset revealed that many labels consisted of racial and gendered slurs, profanity, and obscene language16.

In pictures of flowers, the ones with women are in a studio setting with the person holding the flower or posing with it whereas those of men are of formal ceremonies with bouquets being presented to the person(s)17. This reflects the social power structure of masculinity and femininity.

The not-so-obvious biases

Bias due to a lack of diversity is relatively easy to discover and, to a certain extent, to rectify. However, certain social biases hide in plain sight. The social constructs of gender are present in many datasets. For example, in the OpenImages dataset, images of cosmetics, dolls, and washing machines have more female representation, while those of rugby and beer have more male representation18. These patterns, which encode the social notions of male and female, are then picked up by AI models. Similarly, in pictures of flowers, those featuring women are in studio settings with the person holding the flower or posing with it, whereas those featuring men are of formal ceremonies with bouquets being presented to the person(s)19. This reflects the social power structure of masculinity and femininity.

Images of people with flowers in OpenImages. Source: Wang et al20

Another interesting insight from visual datasets is that in images of people with instruments, the men are shown interacting with the instrument while the women are mere observers21. This is reminiscent of the association of masculinity with power, control, and assertiveness, and of femininity with passiveness and silence.

Images from OpenImages of a person (red bounding box) pictured with an instrument (blue bounding box). Men tend to be featured playing or interacting with the instrument, whereas women are just observers. Source: Wang et al22
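Skews like the cosmetics/rugby and instrument examples can be quantified by counting how often each object label co-occurs with people of each perceived gender. The sketch below uses a handful of invented annotation records purely for illustration; a real audit would read the dataset’s own annotation files and cover far more categories.

```python
# A minimal sketch of a gender-object co-occurrence audit, in the spirit of
# the OpenImages findings above. The records here are invented examples.
from collections import defaultdict

# (object label, perceived gender of the co-occurring person)
annotations = [
    ("cosmetics", "female"), ("cosmetics", "female"), ("cosmetics", "male"),
    ("rugby ball", "male"), ("rugby ball", "male"), ("rugby ball", "female"),
    ("guitar", "male"), ("guitar", "male"), ("guitar", "female"),
]

by_object = defaultdict(lambda: {"female": 0, "male": 0})
for obj, gender in annotations:
    by_object[obj][gender] += 1

# Report the share of female co-occurrence per object category.
for obj, counts in by_object.items():
    total = counts["female"] + counts["male"]
    print(f"{obj:>12}: {100 * counts['female'] / total:4.0f}% female co-occurrence")
```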

There have been many attempts to tackle these issues. Some have created more diverse datasets, such as the Pilot Parliaments Benchmark23 and the FairFace dataset24; these, however, have their own limitations. Another approach has been to create tools and techniques to detect and mitigate bias in existing datasets. Although these are fair efforts, a lot of work remains to be done.


4. Pitfalls in the Quest for AI Supremacy

We have seen how social biases present in our society percolate, through the internet, into training datasets. In this section, we shall see how AI algorithms trained on these datasets pick up and amplify those biases, leading to biased AI systems.

The Race to Create the Best

2012 was a landmark year for artificial intelligence in general and image recognition in particular. It was the year a technique called ‘deep learning’ was used in one of the largest image recognition competitions, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), which involved classifying images from the ImageNet dataset into 1,000 categories. ILSVRC 2012 saw the emergence of AlexNet, a deep convolutional neural network inspired by the neurons of the human brain. AlexNet won by a margin of 10.8 percentage points over the runner-up, a relative improvement of about 41%25.

It was a watershed moment in the history of artificial intelligence, marking a paradigm shift in how AI models are created and trained. Since then, deep learning algorithms have continued to improve, and in 2015 they surpassed human accuracy on the ILSVRC26. The race had begun: to build the most accurate algorithm.

Deep learning surpassing humans in accuracy in ILSVRC. Source: Semiconductor Engineering27

Since then, researchers have created datasets of images of human faces in order to train AI models to recognise them. However, as seen in the last section, most of these datasets are biased in favour of white people. If such an imbalanced dataset is used for both training and evaluation, the model will be biased even while showing high accuracy. For example, a dataset such as Labelled Faces in the Wild (LFW), which has ~88% white faces, will almost certainly produce a biased model; and when the same dataset is used to evaluate that model, as is the norm, the model will appear fairly accurate despite its bias. A model that recognised only white faces, evaluated on LFW, would still score an accuracy of more than 85%. As such, accuracy can be a misleading measure of performance.

Accuracy can be a misleading measure of performance

Data scientists employ a number of metrics beyond accuracy, such as false positive and true negative rates, to handle the problem of imbalanced datasets. However, we also need diverse datasets created specifically to evaluate facial recognition models, so that bias can be identified.
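The following synthetic illustration shows why the headline accuracy figure can mislead: a model that is accurate on an over-represented group and error-prone on an under-represented one still reports high overall accuracy. The group proportions and error rates are invented for illustration and do not describe any specific commercial system.

```python
# A synthetic illustration of how per-group error rates can hide behind a
# high overall accuracy. Labels and predictions are invented, not real data.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = np.where(rng.random(n) < 0.88, "white", "black")   # imbalanced, LFW-like split
y_true = rng.integers(0, 2, size=n)                        # ground-truth match / no match

# Hypothetical model: accurate on the majority group, error-prone on the minority.
error_rate = np.where(group == "white", 0.05, 0.25)
flip = rng.random(n) < error_rate
y_pred = np.where(flip, 1 - y_true, y_true)

print("overall accuracy:", round((y_pred == y_true).mean(), 3))
for g in ("white", "black"):
    mask = group == g
    print(f"error rate for {g} faces:", round((y_pred[mask] != y_true[mask]).mean(), 3))
```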

One such dataset, created specifically for this purpose, is the Pilot Parliaments Benchmark (PPB). It consists of 1,270 images of parliamentarians from African and European countries. When commercial face detection systems were tested on this dataset, the researchers found a much lower error rate for white faces28.

Pilot Parliaments Benchmark. Source: Gender Shades29

However, datasets like the PPB have their own limitations. The PPB focuses only on black and white faces, leaving out many other types of faces, such as those from Asia and South America. This raises important questions. What makes a truly diverse dataset? Is it even possible to make one, given the diversity of humanity? The search for answers to these questions is still ongoing.

The dataset only focuses on black and white faces, leaving out many different types of faces such as those from Asia and South America. This raises many important questions. What makes a truly diverse dataset?

Flaws in the Structure

Another big issue that has come to light is a flaw in the way machine learning models work. Machine learning models work by forming generalisations and correlations, i.e. by associating features of the input with labels and building a generalised concept of the target; the process of doing this is called training. This, however, leads to amplification of the biases that exist in the training datasets. For example, consider a dataset in which the majority (~80%) of the images of people in kitchens are of women and the majority of images of people in garages are of men. During training, the model will correlate kitchen backgrounds and objects with women, and garage backgrounds with men. It will then generalise that people in kitchens are women and people in garages are men. With these generalisations alone it can achieve an accuracy of 80%. Furthermore, the model gets ‘rewarded’ when it makes a correct prediction, and because of the biased nature of the dataset it gets ‘rewarded’ for making biased predictions, which further reinforces the bias. This causes the model to amplify the bias of the dataset.

The model gets ‘rewarded’ when it makes a correct prediction and due to the biased nature of the dataset, it will get ‘rewarded’ for making biased predictions, which will further reinforce the bias.
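The amplification effect can be seen in a toy simulation of the kitchen/garage example. The numbers below are invented to match the ~80/20 split described above; the point is that a classifier trained only on the scene feature turns an 80% correlation in the data into a 100% rule at prediction time.

```python
# A toy simulation of bias amplification: an 80% correlation in the training
# data becomes a deterministic rule in the trained classifier's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
scene = rng.integers(0, 2, size=n)            # 0 = kitchen, 1 = garage
# gender label: 1 = woman, 0 = man; 80% women in kitchens, 80% men in garages
p_woman = np.where(scene == 0, 0.8, 0.2)
gender = (rng.random(n) < p_woman).astype(int)

model = LogisticRegression().fit(scene.reshape(-1, 1), gender)

print("share of women among kitchen images in the data:",
      round(gender[scene == 0].mean(), 2))
print("share predicted as women for kitchen images:    ",
      model.predict(np.array([[0]] * 1000)).mean())  # amplified to 1.0
```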


5. A Tale of Two Worlds

AI as a Software

Bias in AI is a very real problem. So, if we have identified the problem, why are we not resolving it? The answer is that AI systems pose a unique challenge. Technically, any AI system, such as a deep neural network, is software. Unlike traditional software, however, its behaviour is not fully specified in advance. In traditional software, a programmer codes exactly how the program will behave, i.e. all the possible outputs are known beforehand, and the software is then tested to check that all the requirements are met. AI systems, on the other hand, learn from data, produce output, and improve themselves, so their behaviour can change over time, much as a human’s does. This makes it difficult to apply the standards and metrics used for traditional software to AI systems, and it also makes checking for biases difficult.
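The difference can be illustrated with a toy contrast between a traditional exact-output test and the kind of statistical tolerance check a learning system needs. The credit_score function below is a made-up placeholder for a deployed model, and the tolerance is arbitrary; the sketch only shows the shape of the two testing styles.

```python
# A toy contrast between deterministic testing (traditional software) and
# statistical tolerance testing (learning systems). `credit_score` is a
# made-up placeholder, not a real model or API.
import random

def credit_score(income: float, noise: float = 0.0) -> float:
    # Stand-in for a learnt model whose behaviour drifts as it retrains.
    return 300 + 0.01 * income + noise

# Traditional software test: the exact output is known in advance.
assert credit_score(50_000, noise=0.0) == 800

# AI-style test: only a statistical property can be asserted, because the
# learnt behaviour varies over time and across retrained versions.
samples = [credit_score(50_000, noise=random.gauss(0, 5)) for _ in range(1_000)]
mean = sum(samples) / len(samples)
assert abs(mean - 800) < 5, "model behaviour has drifted beyond tolerance"
print("statistical check passed, mean score:", round(mean, 1))
```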

AI systems learn from data, produce output, and improve themselves, so their behaviour can change over time, much as a human’s does.

AI as a Smart Software

If AI systems behave like humans, can we use the metrics and benchmarks used to judge humans? Probably not. Humans are far more complex than AI software. The motivation for learning in AI is very simple: the model is rewarded for correctly associating features with labels. Humans, on the other hand, are driven by a multitude of sociological and psychological factors. AI is prone mainly to implicit biases30, whereas humans are prone to both implicit and explicit biases. Therefore, most cognitive biases, such as confirmation bias, unconscious bias, and ingroup bias, cannot be used to assess social bias in AI the way they are used on human subjects, and standard tests for cognitive biases in humans, such as the cognitive reflection test, may not be usable for AI.

A Common Ground

So, what exactly are AI systems? Are they like humans or like machines? The answer lies somewhere in the middle. They are smarter than traditional machines and can improve themselves, but they are nowhere near humans in terms of intelligence or learning ability. Yet, as seen in the previous sections, AI exhibits many of the biases present in our society. This is understandable and even expected: after all, it is created by humans, modelled on the human brain, and trained on data created by humans.

To fully understand, detect, and mitigate these biases in AI, diverse fields such as computer science, mathematics, the social sciences, and law need to come together. The concepts of bias and fairness from the social sciences need to be adapted and applied to the testing and benchmarking techniques of computer science and software engineering, in order to create benchmarks and metrics that can be used on AI systems.

What is needed is for diverse fields such as computer science, mathematics, social sciences, law, etc to come together. The concepts of bias and fairness from the social sciences need to be modified and applied to the testing and benchmarking techniques of computer science and software engineering to create benchmarks and metrics that can be used on AI systems.

Benchmarking datasets such as the Pilot Parliaments Benchmark31 and tools such as REVISE32 are examples of how this can be done. They, however, barely scratch the surface of the biases present in AI today and of those yet to come.

AI systems impact and interact with human society more profoundly than any technology yet invented. As such, the risks and dangers are also higher. Bias in AI is one such risk which, if left unchecked, will cause problems of mammoth proportions, even metastasising into an existential risk. It is therefore in the interest of broader society to come together and work collaboratively across disciplines, geographies, and interest groups to identify, mitigate, and correct these risks before they materialise.


References

"Apple's 'Sexist' Credit Card Investigated By US Regulator". BBC News, November 11,2019. https://www.bbc.com/news/business-50365609#:~:text=A%20US%20financial%20regulator%20has,be%20inherently%20biased%20against%20women.&text=But%2010x%20on%20the%20Apple%20Card.

Buolamwini, Joy, and Timnit Gebru. 2021. "Gender Shades". Gendershades.Org. http://gendershades.org/overview.html.

Buolamwini, Joy, and Timnit Gebru. 2018. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification". Proceedings of Machine Learning Research 81. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

Celis, L. Elisa, and Vijay Keswani. 2020. "Implicit Diversity In Image Summarization". Proceedings Of The ACM On Human-Computer Interaction 4 (CSCW2): 1-28. doi:10.1145/3415210.

Cooper, Gordon. 2021. "New Vision Technologies For Real-World Applications". Semiconductor Engineering. https://semiengineering.com/new-vision-technologies-for-real-world-applications/.

Gershgorn, Dave. 2021. "The Data That Transformed AI Research – And Possibly The World". Quartz. https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/.

Ipeirotis, Panos. 2021. "Mechanical Turk: The Demographics". Behind-The-Enemy-Lines.Com. https://www.behind-the-enemy-lines.com/2008/03/mechanical-turk-demographics.html.

Yang, Kaiyu, Klint Qinami, Li Fei-Fei, Jia Deng, and Olga Russakovsky. 2020. "Towards Fairer Datasets: Filtering and Balancing the Distribution of the People Subtree in the ImageNet Hierarchy". In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20), 547–558. New York: Association for Computing Machinery. https://doi.org/10.1145/3351095.3375709.

Karkkainen, Kimmo, and Jungseock Joo. 2021. "FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age". In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). https://openaccess.thecvf.com/content/WACV2021/papers/Karkkainen_FairFace_Face_Attribute_Dataset_for_Balanced_Race_Gender_and_Age_WACV_2021_paper.pdf.

Kypraiou, Sofia, Natalie Bolón Brun, Natàlia Altés, and Irene Barrios. 2021. "Wikigender – Exploring Linguistic Bias In The Overview Of Wikipedia Biographies". Wiki-Gender.Github.Io. https://wiki-gender.github.io/

Lauret, Julien. 2021. "Amazon's Sexist AI Recruiting Tool: How Did It Go So Wrong?". Medium. https://becominghuman.ai/amazons-sexist-ai-recruiting-tool-how-did-it-go-so-wrong-e3d14816d98e.

Milenkovic, Jovan. 2021. "30 Eye-Opening Big Data Statistics For 2020: Patterns Are Everywhere". Kommandotech. https://kommandotech.com/statistics/big-data-statistics/.

Perez, Carlos. 2021. "How Artificial Intelligence Enables The Economics Of Abundance". Medium. https://medium.com/intuitionmachine/artificial-intelligence-and-the-economics-of-abundance-92bd1626ee94.

Recke, Martin. 2021. "Why Imagination And Creativity Are Primary Value Creators | NEXT Conference". NEXT Conference. https://nextconf.eu/2019/06/why-imagination-and-creativity-are-primary-value-creators/.

