Meredith Broussard: How Computer Programs Can Make Unfair Decisions That Hurt People
- Sarah Parker
- May 1, 2023
- 5 min read
Updated: May 2, 2023
Book: More Than a Glitch
Website: This site is a great launch pad to learn more about her academic research focusing on artificial intelligence in investigative reporting and ethical AI.
Quote: "But what if racism, sexism, and ableism aren’t just bugs in mostly functional machinery—what if they’re coded into the system itself?"

Have you ever heard of artificial intelligence or AI? It's when computers and machines can do things that normally need human intelligence, like recognize faces or drive cars. Have you ever heard of computer programs that make decisions for us? These programs are called "algorithms," and they are used to help us do things like find information on the internet, watch videos online, or even decide who gets hired for a job. But did you know that sometimes these AI systems and algorithms can be unfair to certain people? That's what Meredith Broussard, a data scientist and journalist, talks about in her book "More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech".
Broussard says that AI systems can be biased because they were trained on data that doesn't represent everyone equally. But she doesn't think it's enough to simply make these AI systems look more inclusive; instead, she says we need to find the biased algorithms themselves and change them. That work goes a few layers deeper than the user interfaces or the questions we answer when we interact with technology throughout the day. She has some ideas about how to do this, like using investigative journalism to uncover unfair AI systems and making sure developers think about accessibility for everyone when they're creating these systems.
Broussard's book shows us that we need to be careful when we use AI systems and make sure they're not being unfair to certain groups of people. By working to change these systems, we can make the world a fairer and more equitable place for everyone.
Facial Recognition
Facial recognition technology is a tool that uses algorithms to identify individuals based on their unique facial features. While it may seem like a convenient and efficient way to enhance security or streamline identification, it has proven problematic in several ways. One major issue is that the technology has significant racial and gender biases, leading to inaccurate and discriminatory results. For example, studies have shown that facial recognition algorithms trained on predominantly white datasets struggle to accurately identify individuals with darker skin tones, leading to higher rates of false positives and false negatives for people of color. Additionally, facial recognition technology has been used by law enforcement agencies to target specific communities, raising concerns about civil liberties and privacy.
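To see what an audit of this kind of disparity might look like, here is a minimal sketch in Python. The records and group labels are made-up assumptions for illustration only; a real investigation would run the facial recognition system on a large, labeled, demographically diverse test set and compare error rates across groups.

```python
from collections import defaultdict

# Hypothetical audit records, purely for illustration: (group, predicted_match, actual_match).
results = [
    ("lighter-skinned", True, True), ("lighter-skinned", False, False),
    ("darker-skinned", True, False), ("darker-skinned", False, True),
]

def error_rates_by_group(records):
    """Compute false positive and false negative rates for each demographic group."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1  # a true match the system missed
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1  # a non-match the system wrongly accepted
    return {
        group: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for group, c in counts.items()
    }

print(error_rates_by_group(results))
```

If one group's error rates come out much higher than another's, that is exactly the kind of disparity the studies above describe.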
Another problem with facial recognition technology is the lack of transparency and accountability surrounding its use. Most facial recognition systems are proprietary, meaning that their algorithms and data are not made public. This makes it difficult to identify and correct bias or errors, and it can also enable misuse or abuse of the technology. For example, in 2019 it was revealed that US Immigration and Customs Enforcement (ICE) had used facial recognition technology to scan millions of driver's license photos without the knowledge or consent of the people involved. This kind of unchecked use raises serious questions about the technology's potential impact on civil rights and individual privacy.
Ability Bias
Ability bias in technology refers to the discrimination against individuals with disabilities, often by designing products or services that exclude or disadvantage them. One example of problematic ability bias in technology is the use of inaccessible websites or apps. For instance, if a website is not designed with accessibility in mind, it may have features that make it difficult or impossible for people with disabilities to use, such as poor color contrast or no support for assistive technologies like screen readers. This can create barriers for people with disabilities to access information, products, and services, limiting their opportunities and independence.
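Some of these barriers can be measured directly. Poor color contrast, for instance, has a published standard behind it: the WCAG contrast ratio. Here is a small sketch that checks a text color against a background color; the example colors are assumptions chosen just to show a failing case.

```python
def relative_luminance(hex_color):
    """Relative luminance of an sRGB color, per the WCAG 2.x definition."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def channel(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(foreground, background):
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Light grey text on a white background: fails the WCAG AA minimum of 4.5:1 for normal text.
ratio = contrast_ratio("#999999", "#FFFFFF")
print(f"{ratio:.2f}:1 -> {'passes' if ratio >= 4.5 else 'fails'} WCAG AA")
```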
Ability bias also shows up in AI systems that reinforce stereotypes and biases against people with disabilities. For instance, AI systems trained on biased data can produce inaccurate or discriminatory outcomes, such as misdiagnosis or denial of services. Some AI technologies also perpetuate stereotypes about people with disabilities, such as the assumption that they are less competent or less productive than their non-disabled peers, further marginalizing them. As technology becomes more integrated into our daily lives, it is important to consider the impact of ability bias on people with disabilities and to work toward more inclusive and accessible technologies.
Gender Rights
One of the clearest gender-rights issues in technology is the persistent gender gap in the tech industry, particularly in leadership and technical roles. Despite efforts to increase diversity and inclusion, women and other underrepresented groups still face significant barriers to entry and advancement. Studies have shown that women are less likely to be hired for technical roles, and when they are, they are often paid less and given fewer opportunities for promotion. Women are also more likely to experience harassment and discrimination in the workplace, which can damage their career trajectories and mental health. These disparities have serious consequences not only for individual women but for the tech industry as a whole, because diverse perspectives are essential for innovation and problem-solving.
This can also be seen in the AI algorithms themselves, which are trained on large datasets and can reflect and reinforce societal biases and stereotypes. For example, the facial recognition algorithms mentioned earlier have been found to be less accurate at identifying women, and especially women of color, because of biases in the training data. Similarly, natural language processing (NLP) models have been found to exhibit gender biases in language, such as associating certain words with specific genders: a common example is models that link "nurse" with women and "doctor" with men. These biases can have serious consequences, reinforcing harmful stereotypes and making it harder for women and other marginalized groups to access opportunities and resources. Addressing them and promoting gender equity in technology requires ongoing efforts to increase awareness, diversity, and inclusion at all levels of the industry.
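One way researchers surface these word associations is by projecting occupation words onto a "he minus she" direction in a word embedding space. The sketch below uses tiny, hand-made vectors purely as an assumption for illustration; a real audit would load pretrained embeddings such as word2vec or GloVe.

```python
import numpy as np

# Toy, hand-made word vectors for illustration only; real embeddings have hundreds of dimensions.
vectors = {
    "he":     np.array([ 1.0, 0.2, 0.1]),
    "she":    np.array([-1.0, 0.2, 0.1]),
    "doctor": np.array([ 0.6, 0.8, 0.3]),
    "nurse":  np.array([-0.7, 0.8, 0.3]),
}

def gender_lean(word):
    """Project a word onto the he-she axis: positive leans 'he', negative leans 'she'."""
    direction = vectors["he"] - vectors["she"]
    direction = direction / np.linalg.norm(direction)
    return float(np.dot(vectors[word], direction))

for word in ("doctor", "nurse"):
    print(word, round(gender_lean(word), 2))
```

In these toy vectors, "doctor" leans toward "he" and "nurse" toward "she"; audits of real pretrained embeddings have reported the same kind of pattern.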
The Good News
We can make sure that algorithms are fair and don't hurt people: by designing them with accessibility in mind, by involving people from different backgrounds in building them, and by holding companies accountable when algorithms cause harm.
First, we can design algorithms with accessibility in mind. This means making sure that everyone can use them and benefit from them, no matter where they come from or what they look like. We can do this by making sure that algorithms are tested on a diverse group of people and that they work well for everyone. This can help prevent unfair bias from creeping into the algorithms.
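One simple way to put the "test on a diverse group" idea into practice is to make per-group evaluation part of the release process: check the algorithm separately for each group in the test set and flag large gaps before it ships. The records and the gap threshold below are assumptions made up for the sketch.

```python
# Hypothetical evaluation records, for illustration: (group, model_was_correct).
evaluations = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

MAX_ACCURACY_GAP = 0.05  # assumed threshold; a real team would set this per product

def accuracy_by_group(records):
    """Accuracy of the model for each group in the test set."""
    totals, correct = {}, {}
    for group, was_correct in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + int(was_correct)
    return {group: correct[group] / totals[group] for group in totals}

accuracies = accuracy_by_group(evaluations)
gap = max(accuracies.values()) - min(accuracies.values())
print(accuracies)
if gap > MAX_ACCURACY_GAP:
    print(f"Accuracy gap of {gap:.2f} exceeds {MAX_ACCURACY_GAP}; investigate before release.")
```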
Second, we can make sure that people from different backgrounds are involved in creating algorithms. This means having people with different experiences and perspectives working on algorithms, so that they are fair and unbiased. We can do this by encouraging more people from diverse backgrounds to learn about and work in the tech industry.
Third, we can hold companies accountable for creating fair and unbiased algorithms. This means making sure that companies are transparent about how they use algorithms and how they make decisions. It also means holding them accountable when their algorithms cause harm or are unfair.
By working together, we can make sure that algorithms are fair and help make the world a better place for all people!