
Feather Pending: Case Study on Tech Fail with Clearview AI Facial Recognition

  • Writer: Sarah Parker
  • Jun 3, 2023
  • 5 min read

This post is shaping up as an academic-style paper reflecting the research that grew out of my curiosity about the funky airport facial recognition kiosks. Here's a glimpse at the work in progress:


Start Where You Are: A Film and Some Poking Around

I’m an accountant by training, so I adore source documents. They provide transparency of a transaction and clarity of its details, giving me what I need to do my tasks and provide analysis for decision making. I use them to reflect historical data, to make adjustments that align future transactions with what the people in my company are trying to achieve, and as a tool to aid my critical thinking. Through my graduate studies, I have been taking a closer look at what goes on behind the user interfaces of technology, such as facial recognition kiosks. I’m curious about the source documents showing where that data goes, how it is protected from hackers, whether it’s resold, and who benefits from the use of this technology.

This has led me to an independent study on how the algorithms, the math equations that make this tech work, are structured. In this instance, I wonder how this is affecting the wide variety of people traveling and navigating these funky kiosks being implemented in airports. Since we all view and try to make sense of the world around us differently, just as I am helping my kids do, I expect to find bias. To me, bias is the set of filters or lenses I view the world through, and in moderation it is neither good nor bad. It reflects how I perceive and understand the world around me at a given moment in time, and I challenge myself to keep my biases under constant revision as I gain new information. I wonder whether these facial recognition kiosks are being instructed to practice that same concept of adapting and incorporating new data.



To start learning more, I watched "Coded Bias," a documentary film released in January 2020 and directed by Shalini Kantayya that explores the issue of bias in algorithms and artificial intelligence (AI). The film follows MIT researcher Dr. Joy Buolamwini as she discovers that facial recognition technology has difficulty recognizing darker-skinned faces and female faces, revealing a clear bias in the algorithms. The documentary also examines other examples of bias in AI, including predictive policing and hiring algorithms.


The film argues that algorithmic bias has serious implications for society, as these technologies are increasingly being used to make decisions that affect people's lives, such as hiring, lending, and policing. The funky airport facial recognition kiosks are just another example of how common these tools are becoming. The film also explores the lack of regulation around AI and the need for greater transparency and accountability in the development and use of these technologies. I wondered whether this was true for the companies running the airport kiosks, and whether they were adapting, releasing algorithmic and database updates for their tools as they process a wider and wider range of people.

Throughout the film, Dr. Buolamwini and other researchers, activists, and policymakers work to raise awareness of the issue of algorithmic bias and push for change. They argue that it is essential to address bias in AI if we want to build a fair and just society. For me that meant talking about algorithms and racial equity with those around me. Did the people in my life have more context than I did? I was curious who was aware of what was going on behind the unassuming facial recognition kiosks that get you through the dreaded TSA security lines faster.


Mindset Shift: The Survey

To continue learning and to start conversations that might answer my questions, I developed a series of questions folks could respond to after watching Coded Bias. I planned to host film showings as a friend-group activity. I thought that because I talk to everyone, have a home where we might have a four-person or a twelve-person dinner on any given day, and had this burning curiosity, this would be an easy task. It was not. As the weeks crept by, I told myself I would at least ask friends, family, colleagues, classmates, and acquaintances to watch Coded Bias and fill out the short survey as it was convenient for them.

I was surprised by my resistance to making the ask. I was surprised by the whack-a-doodle excuses I came up with. ‘Now is not the right time for this person with everything they have going on in their life.’ ‘This isn’t an appropriate moment to bring these sorts of topics up because we are hanging out and trying to be off duty.’ ‘This person reacted poorly to a tangential topic; I don’t know enough about this tech stuff to debate with them.’ All of this seems like nonsense when I remind myself that those kiosks, the flights, and the travelers navigating these new tools haven’t stopped during the weeks I let slip by. I recalled the questions fueling my curiosity. Three years later, are there still faces that go unidentified or are misidentified while folks are navigating their travel experiences? Is this facial recognition technology just Google for faces, or is it a privacy violation?

When I did a clearing exercise, working with myself to figure out what was blocking me from making the ask and unpacking my inner dialogue, I was looking for clues about where this hesitancy was coming from. When I figured out the pattern, I was not surprised. I said the quiet part out loud: I was being unintentionally racist. By not giving the folks around me the opportunity to choose their level of engagement with the content, by not asking questions, by staying small and quiet and erring on the side of maintaining the peace of the moment, I was avoiding a closer look at the systems and processes behind the user interface of the kiosks using facial recognition. I knew what I would find and was playing it safe, not engaging with the discomfort that would lie in those conversations.

I’m early in my racial equity journey and still unlearning my expectations that there will ever feel like a right time to talk about modern racism, or that I will ever feel ready and equipped with enough information to begin and stay in difficult conversations. But just as I wake up every morning and choose my partner, I wake up every day and choose to actively work on my ability to break cycles of trauma with the intentionality of my actions. I want to practice engaging with my discomfort, stay in conversations that challenge my biases, and give people opportunities to share the impact of the mistakes I make. Being with someone in their pain is more important to me than my intent or my comfort. That’s what I needed others to do for me when I was in so much pain, when my anxiety and depression were consuming me.

So, I took a deep breath and got out of my own way. I started making the ask. I pivoted my mindset and started the conversations I wanted to have around algorithms and racial equity. Out of the 50 people I intended to ask, I actually asked 32, and of those folks, 14 had engaged with the film and survey by the time of this essay.



