
Cathy O'Neil: Value Added Models - Creating their own Reality and Justifying their Results

  • Writer: Sarah Parker
  • Apr 23, 2023
  • 5 min read

Website: Cathy's blog, where she is, in her own words, "Exploring and venting about quantitative issues"

Quote: "This type of model is self perpetuating, highly destructive - and very common."

American mathematician Cathy O'Neil

As technology continues to permeate every aspect of our lives, algorithms are increasingly being used to automate tasks that were once the domain of humans. From hiring decisions to credit scoring to predictive policing, algorithms are being deployed to make decisions that affect millions of people every day.


But as Cathy O'Neil argues in her book "Weapons of Math Destruction," these algorithms can perpetuate racial biases and inequalities, even when that is unintentional. When algorithms are developed without sufficient testing or oversight, they can reflect the biases and assumptions of their creators, leading to unfair and discriminatory outcomes. They lack the human element of social context and are "overly simple, sacrificing accuracy and insight for efficiency". What might seem like an effective tool for a complex problem often ends up masking the root causes of the very issues it is meant to solve. To summarize, O'Neil points to three elements to look for when identifying a weapon of math destruction: opacity, scale, and damage.


Algorithms have the potential to be powerful tools for good, but only if they are developed and deployed in a way that is transparent, accountable, and equitable. By raising awareness of the issues of algorithmic bias and promoting more oversight and regulation, we can work towards a future where algorithms are used to promote, rather than undermine, racial equity.

Hiring Decisions

Companies are increasingly using algorithms to sort through large numbers of job applications and identify the most promising candidates. However, these algorithms are not immune to bias and can perpetuate the same biases that already exist in the job market.


For example, an algorithm might be programmed to look for certain keywords or experiences in a job application to determine whether a candidate is a good fit for a particular role. But if the algorithm prioritizes candidates who attended Ivy League schools or have experience at prestigious companies, it may inadvertently discriminate against candidates from underprivileged backgrounds or less prestigious schools. This perpetuates a cycle of privilege and exclusion, in which only a certain segment of the population can access high-paying jobs and career advancement.
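
To make this concrete, here is a minimal, hypothetical sketch (in Python) of how such a screener could work. The school list, keywords, and weights are my own assumptions for illustration, not any real vendor's model, but they show how a prestige bonus can outrank actual qualifications.

```python
# Hypothetical resume screener: weights keyword matches and school prestige.
# All values below are illustrative assumptions, not a real hiring model.

PRESTIGE_SCHOOLS = {"Harvard", "Yale", "Princeton"}   # assumed proxy for "pedigree"
TARGET_KEYWORDS = {"python", "sql", "leadership"}

def score_resume(resume: dict) -> float:
    """Score a resume; higher means 'more promising' to the screener."""
    score = 0.0
    # Keyword matching rewards whoever knows which words to include.
    score += 2.0 * len(TARGET_KEYWORDS & set(resume["keywords"]))
    # The prestige bonus encodes the privilege cycle described above:
    # candidates from less prestigious schools start at a disadvantage
    # regardless of actual ability.
    if resume["school"] in PRESTIGE_SCHOOLS:
        score += 5.0
    return score

candidates = [
    {"name": "A", "school": "Yale",          "keywords": ["python", "sql"]},
    {"name": "B", "school": "State College", "keywords": ["python", "sql", "leadership"]},
]

ranked = sorted(candidates, key=score_resume, reverse=True)
print([c["name"] for c in ranked])  # ['A', 'B']
```

Candidate B matches more of the target keywords, yet the prestige bonus still ranks Candidate A first, which is exactly the cycle of privilege described above.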


Additionally, algorithms may be trained on biased data sets, which can further perpetuate biases. For example, if an algorithm is trained on historical data that reflects the biases of the past, such as discrimination against women or people of color, it may inadvertently perpetuate those biases in the future.
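
As a hedged illustration of that point, the toy example below fits a standard logistic regression to made-up historical hiring decisions that favored "prestige" candidates. The features, labels, and outcome are synthetic assumptions; the point is only that a model trained on biased decisions will reproduce them for new candidates.

```python
# Toy example: a model trained on biased historical hiring decisions
# reproduces the bias. All data below is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, attended_prestige_school]
# Labels: past "hired" decisions that favored prestige over experience.
X_hist = np.array([
    [2, 1], [3, 1], [1, 1], [4, 1],   # prestige candidates: all hired
    [5, 0], [6, 0], [4, 0], [7, 0],   # more experienced candidates: mostly rejected
])
y_hist = np.array([1, 1, 1, 1, 0, 0, 0, 1])

model = LogisticRegression().fit(X_hist, y_hist)

# Two new candidates with identical experience, differing only in school background.
new_candidates = np.array([[5, 1], [5, 0]])
print(model.predict_proba(new_candidates)[:, 1])
# The "prestige" candidate gets the higher predicted probability of being
# hired, purely because past decisions favored prestige.
```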


Credit Scoring

Algorithms are used to determine creditworthiness, which affects access to credit cards, home loans, and even jobs. However, these algorithms can penalize individuals from low-income areas, which are often communities of color. This makes it harder for people in these communities to access credit and build wealth, perpetuating economic inequalities.


One example of this is the use of zip codes in credit scoring. Many credit scoring algorithms use an individual's zip code as a factor in determining creditworthiness, under the assumption that people who live in certain zip codes pose a higher credit risk. In practice, this disproportionately penalizes residents of low-income areas, which are often communities of color, cutting them off from resources and making it harder to build a credit history and accumulate wealth.
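
A minimal sketch of what that can look like, assuming a made-up scoring formula and made-up "high-risk" zip codes (this is not any bureau's actual model):

```python
# Hypothetical credit score that uses zip code as a feature.
# Base score, weights, and zip codes are illustrative assumptions.

HIGH_RISK_ZIPS = {"60621", "48204"}   # areas the model treats as "higher risk"

def credit_score(applicant: dict) -> int:
    score = 650                                        # assumed base score
    score += 10 * applicant["years_on_time_payments"]
    score -= 50 * applicant["recent_defaults"]
    # The zip-code penalty: identical payment histories get different
    # scores purely because of where the applicant lives.
    if applicant["zip"] in HIGH_RISK_ZIPS:
        score -= 40
    return score

same_history = {"years_on_time_payments": 5, "recent_defaults": 0}
print(credit_score({**same_history, "zip": "60614"}))  # 700
print(credit_score({**same_history, "zip": "60621"}))  # 660
```

Two applicants with identical payment histories end up with different scores for no reason other than their address.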


Moreover, algorithms used in credit scoring may be trained on biased data sets. For example, if an algorithm is trained on historical data that reflects discrimination against certain groups, such as women or people of color, it may inadvertently perpetuate those biases in the future. See a pattern yet? The scoring algorithm is hidden.


"These models, powered by algorithms, slam the doors in the face of millions of people, often for the flimsiest of reasons, and offer no appeal."


Predictive Policing

Police departments across the United States are using algorithms to predict where crimes are likely to occur and to target resources accordingly. However, these algorithms have been shown to disproportionately target Black and Brown communities, leading to increased surveillance, harassment, and arrests. This perpetuates a cycle of over-policing and criminalization that has devastating effects on communities of color.


These algorithms may be trained on historical crime data, which may reflect patterns of discrimination and bias in policing. If the algorithm is trained on data that reflects discriminatory practices such as racial profiling and over-policing in certain neighborhoods, it will disproportionately target those communities in the future, regardless of whether or not they have a higher crime rate. This perpetuates a cycle of discrimination and reinforces existing biases in the criminal justice system.
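
The feedback loop is easier to see in a toy simulation. Everything below is an assumption for illustration (two neighborhoods with identical true crime rates, patrols allocated in proportion to previously recorded crime), not any vendor's actual system:

```python
# Toy feedback-loop simulation: patrols follow past recorded crime, more
# patrols record more incidents, and the next forecast targets the same area.
import random

random.seed(0)

TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}   # identical underlying rates
recorded = {"A": 30, "B": 10}              # history skewed by past over-policing of A

for year in range(5):
    total = sum(recorded.values())
    # Allocate 100 patrols proportionally to *recorded* crime, not true crime.
    patrols = {n: round(100 * c / total) for n, c in recorded.items()}
    for n in recorded:
        # Each patrol detects an incident at the neighborhood's true rate,
        # so more patrols mean more recorded incidents even at equal rates.
        recorded[n] += sum(random.random() < TRUE_CRIME_RATE[n]
                           for _ in range(patrols[n]))
    print(year, patrols, recorded)

# Neighborhood A keeps drawing most of the patrols, so the gap in recorded
# crime keeps growing even though the true rates never differed.
```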


These systems may also rely on data that is not directly related to criminal activity, such as social media activity or even music preferences. Pair that with facial recognition and, as O'Neil writes, "The question, however, is whether we've eliminated human bias or simply camouflaged it with technology. The new recidivism models are complicated and mathematical. But embedded within these models are a host of assumptions, some of them prejudicial." In other words, using these automated solutions to predict crime can produce false positives and unfairly target individuals who may have no actual connection to criminal activity.


Communities should be involved in the development and implementation of predictive policing algorithms to ensure the solution actually addresses what those communities are experiencing. In a court of law, the inner workings of these algorithms are not transparent; they are intelligible only to a small group of specialists, which leaves the people they affect unable to correct misinformation.


"Opaque and invisible models are the rule, and clear one very much the exception."


What Can be Done? Transparency Matters.

O'Neil suggests that we need more regulation and oversight of algorithms, which in my mind means something akin to the oversight applied to the finance and legal industries. That would require more transparency and accountability in how algorithms are developed and deployed, as well as more rigorous testing to ensure they do not perpetuate biases and inequalities.


However, there are challenges to implementing these solutions. For one, the tech industry has historically resisted regulation, arguing that it would stifle innovation. Additionally, there is a lack of expertise in government and regulatory agencies when it comes to understanding and regulating algorithms.


Despite these challenges, there are organizations and individuals working to address algorithmic bias and promote racial equity. The Algorithmic Justice League, for example, is a nonprofit organization that advocates for more transparency and accountability in algorithmic decision-making. And individuals can take action by advocating for more oversight of algorithms and supporting organizations working to address these issues.
