Case Study on Tech Fail in Social Context: Pennsylvania’s Allegheny Family Screening Tool
- Sarah Parker

- Apr 7, 2023
- 9 min read
An artificial intelligence (AI) algorithm's entire world is built on a literal interpretation of the data you put into it, and it is through that data that we inadvertently teach AI to do the wrong thing, things that have significantly harmful impacts in a social context. The AI has no notion of context, definitions, or the interconnected meanings of words and concepts, and it lacks the necessary human element to adjust its outputs based on the nuanced complexities that make up our lives. This is not to say that there are not some incredibly neat, data-based tools being developed with AI, but humans must intentionally select the right-sized tool for the job and intervene when the results don't make sense. If that is not done, decisions driven by algorithms have powerful consequences for people's everyday lives.
For example, in Allegheny County, Pennsylvania, an algorithmic tool called the Allegheny Family Screening Tool (AFST) was designed to determine which reports of child neglect, ranging from inadequate housing to poor hygiene, were most critical to investigate. The idea is to help child welfare workers decide who gets the knock on the door and who doesn't, refocusing limited resources on higher-risk cases. On the surface this seems like a practical use of technology to aid a social system that is already struggling to adequately respond to and navigate its caseloads. A tool that predicts the "risk" that social workers will need to place a child in foster care sounds, at first, hands-down beneficial. Looking a layer deeper, however, defense attorneys, independent researchers, and communities are finding that the intent of the algorithm-based tool is out of step with the traumatic experiences lived through by the families impacted by these seemingly innocuous reports.
In the 2022 Associated Press article An algorithm that screens for child neglect raises concerns, the authors investigate the idea of fighting something families and their attorneys can't see: "an opaque algorithm whose statistical calculations help social workers decide which families should be investigated in the first place" (Burke, 2022). Opaque, in this situation, refers to the lack of transparency: families do not even know the tool is being used, much less have the ability to see the information it generates in their files. In its first years of operation, the AFST showed a pattern of recommending two-thirds of the Black children reported for a neglect investigation, compared with half of all other children reported. According to the article, "the independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time" (Burke, 2022).
What this investigative journalism highlights for me is the lack of clarity and transparency in the use and application of this tool. Since family court hearings are closed to the public and the records are sealed, families, their attorneys, and the public are unable to identify first-hand any cases or details of who the algorithm recommended be mandatorily investigated for child neglect, or any cases that resulted in a child being sent to foster care. Unlike a credit score, this score is not available for us to review for errors, much less to appeal the mistakes or misinformation included in its generation. The source data, from beginning to end, is sealed.
Developers have the admirable intent of using such technology-based tools to make the agency's potentially lifesaving work of investigating reports of neglect more thorough and efficient. While I agree child welfare officials should use whatever they have at their disposal to make sure children aren't neglected, the article points out that "scrutinizing a family's life could set them up for separation." In theory, AI algorithms and other predictive technologies can provide a scientific cross-check for personal biases because they calculate a risk score and remove an element of human error. The output should be a hard truth you can hold up in court, because the disparities were interpreted by an algorithm, a.k.a. a math calculation, based on historical facts, a.k.a. public data. But who is included in that data?
These independent researchers also found that, "for more than two years, a technical glitch in the tool sometimes presented social workers with the wrong scores, either underestimating or overestimating a child's risk." And don't forget that the children flagged for investigation were primarily from Black families. The article reports that county officials have since fixed the problems, but I wonder whether fixing the AI algorithm really mattered, given the very imperfect data systems it was designed from. "Critics say even if race is not measured outright, data from government programs used by many communities of color can be a proxy for race" (Burke, 2022). Remember, AI's entire reality is based on the data we feed into it.
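To make that proxy dynamic concrete, here is a minimal sketch in Python. It is purely illustrative, not the AFST or the county's data: the synthetic population, the hypothetical `program_use` feature standing in for public-benefits records, and the simple logistic regression are all assumptions made for the demonstration. Even though the protected attribute is never shown to the model, the risk scores it produces still split along group lines because the proxy carries that information in.

```python
# Illustrative sketch only: synthetic data, not the AFST or any county system.
# It shows how a model with no "race" column can still score groups differently
# when a correlated proxy feature (here, public-program enrollment) is present.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute; never given to the model as a feature.
group = rng.integers(0, 2, size=n)

# Proxy feature: program enrollment that is far more common for one group,
# mirroring how public-benefits databases overrepresent some communities.
program_use = rng.binomial(1, np.where(group == 1, 0.7, 0.2))

# Outcome labels drawn from historical decisions that already skewed toward
# families visible in those databases.
label = rng.binomial(1, 0.10 + 0.15 * program_use)

model = LogisticRegression().fit(program_use.reshape(-1, 1), label)
scores = model.predict_proba(program_use.reshape(-1, 1))[:, 1]

print("mean risk score, group 0:", round(scores[group == 0].mean(), 3))
print("mean risk score, group 1:", round(scores[group == 1].mean(), 3))
# Group 1 receives systematically higher scores even though 'group' was never
# a feature: the proxy carried it in.
```

Dropping the race column, in other words, does not drop race from the model's behavior; it only hides where the disparity comes from.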
The article goes on to highlight how officials dismiss social workers' concerns when they ask why these families, the majority of whom are Black, were originally referred for investigation. The social workers in the loop are encouraged to use their discretion to override the algorithm's suggestions for investigation, yet when they question the tool generating those recommendations, they are met with gaslighting and dead ends. In a 2021 memo, the U.S. Department of Health and Human Services cited racial disparities "at nearly every major decision-making point" of the child welfare system, which means that the majority of the data available from historical neglect cases comes from families who are "overrepresented in reports of suspected maltreatment and are subjected to child protective services (CPS) investigations at higher rates than other families" (Child Welfare Information Gateway, 2021). To my understanding of how these components fit together in a larger system, they appear to result in a broken process that removes children from families already struggling. This is a process where the issues of the existing child welfare system are being coded into the solution, a well-intentioned solution, but one that is more efficiently creating harmful impacts for Black families.
To my way of thinking, these are impacts that Black families, their attorneys, the press, and even the social workers utilizing the tools are unable to troubleshoot. The difficulty and cost of recourse to mitigate the harm the AI algorithm is generating raise the barriers to a happy ending for a neglect investigation. Remember, these are welfare investigations that began as reports of child neglect, ranging from inadequate housing to poor hygiene, not more grievous reports of abuse. So this AI tool that was designed to help guard against subjectivity and bias is instead making the problem worse, reinforcing existing issues and making mistakes harder to fix.
The American Civil Liberties Union took notice of the use of the AFST in Pennsylvania and warns that if history repeats itself, this time the results will be "deemed unquestionable truths supported by science and math" (ACLU, 2023). Since you can't get any information about what the algorithmic score says about you or your child, or how it played a role in the decision to investigate you, the tool essentially codes what should be policy decisions into software while circumventing the accountability that comes with adopting traditional policy.
What if there were more clarity surrounding what Allegheny County's AFST generates and a clear way to appeal and correct misinformation? What if there were transparency about the AI algorithms used, disclosure of the data sources and assumptions made in the development of the tool, and an error log revealing the corrections made after coding mistakes? If developers were open to community criticism, committed to showing their work and listening to critical voices, if leadership decisions were made with ethical and racial concerns confronted center stage, and if a mitigation plan were in place for when mistakes are discovered, then a more sophisticated AI algorithm could be developed. But would it be enough?
Even if social workers are cognizant of the racial disparities in the underlying data the tool relies on, the local teams don't appear to have much input into its development, making the results complex to interpret and difficult to explain to the families. And if the generated scores were shared transparently, that might discourage people from seeking services and complicate the issues families face yet again. Additionally, as the ACLU article illustrates, the people with the most contact with government agencies are the ones the databases fed into the AFST were built from. That means Black families were overrepresented in the dataset because the government had more data on them and comparatively little on a random cross-section of the county's population, their white counterparts, or higher-income residents. Moreover, as the article indicates, "many of these families do not even know the county is using the AFST, much less how it functions or how to raise concerns about it" (ACLU, 2023). This is now a wicked problem.
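Before unpacking what makes it wicked, the overrepresentation dynamic itself is worth a quick back-of-the-envelope illustration. The numbers below are made up for the sketch, not county statistics: assume both groups have the same true underlying rate, and the only difference is how much of each group is visible in agency databases.

```python
# Illustrative sketch only: hypothetical numbers, not Allegheny County data.
# With equal true rates, unequal "coverage" in government databases alone
# produces unequal recorded cases for any model trained on those records.
population = 10_000
true_rate = 0.05                            # assume the same underlying rate for both groups
coverage = {"more-surveilled group": 0.8,   # share of the group visible in agency data
            "less-surveilled group": 0.3}

for name, share_visible in coverage.items():
    recorded_cases = population * share_visible * true_rate
    print(f"{name}: about {recorded_cases:.0f} recorded cases per {population} people")
```

Even with identical behavior, the more-surveilled group accumulates far more records, and a downstream model reads that extra paperwork as extra risk. That feedback loop between surveillance, data, and scoring is part of what makes this a wicked problem rather than a bug to patch.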
Wicked problems are typically social or cultural problems, but they are like untreated mental health issues in that the roots of the issues have been ignored for so long. They are unstructured, open-ended, multidimensional, and systemic, and they may not have a known solution. As author Robert Marshak wrote in Reflections on Wicked Problems in Organizations, "A great deal of organizational life and change theory is based on assumptions of rationality. That is, people will change if presented with a rational case for change or the proper facts and figures" (Marshak, 2009). The AFST presents risk scores as facts for these child welfare investigations, but those figures can be misinformation.
The defining features of wicked problems are that they have no clear boundaries, they involve a multitude of interactions and interdependencies, and the values and realities being addressed are not easily agreed upon within the larger population. In other words, these are problems that have become dysfunctions of complex systems. The problem shifts over time, so providing a rational case is extremely difficult. Introducing AI algorithms, a tech-based solution that only a select sliver of the general population can understand and interpret, adds a dangerous layer of complexity to the child welfare system.
One technique for mitigating wicked problems is to combine systems thinking with agile methodology. Systems thinking helps you understand interconnection points and their influence on other systems. It partners well with an agile methodology because of agile's iterative approach to designing and producing an outcome. Together, the two techniques allow for improved solutions through collaboration. "This agile, collaborative environment breeds the ability to be efficient and effectively meet the stakeholders' changing requirements" (Wong, 2020). Human traits and soft skills such as compassion and empathy will need to be built into these iterative solutions to address the wicked problems inherent in the child welfare system.
I am grateful the ACLU recognizes that Allegheny County's Family Screening Tool is a wicked problem. In their study they conclude, "These challenges demonstrate the urgent need for transparency, independent oversight, and meaningful recourse when algorithms are deployed in high-stakes decision-making contexts like child welfare. Families have no knowledge of the policies embedded in the tools imposed upon them, no ability to know how the tool was used to make life-altering decisions and are ultimately limited in their ability to fight for their civil liberties, cementing long-standing traditions of how family regulation agencies operate" (ACLU, 2023). I am a mother, one who just spent the last five years addressing my postpartum mental health crisis as I started my own family. I am privileged to have been able to afford and receive the care I needed and to have a support network that saw me through, unlike many who struggle to provide their children adequate housing and proper hygiene. I am a mom who had to sort through crippling fears that the child welfare system would take my kids away because I was a bad parent during my postpartum years. But I am white. I am inherently afforded more grace in the system than the Global Majority. Knowing my kids and I are likely never to be reflected in the data this AI algorithm is based on gives me personal comfort and ignites a new fear in me.
I have come to realize that my time in that fearful phase of the child welfare system is temporary. I have had the privilege to return to school and learn more about what goes on behind the user interface of the tech solutions I use daily at work. I have gained knowledge of how technology is doing more than moving us from paper to tech-based processes. It is forcing us to reorganize the systems we operate in and to confront the harmful impacts our well-intentioned solutions are having on people's lives. I think we currently have a unique opportunity to use advancements in technology to renegotiate the systems we have for solving problems like how family regulation agencies operate. But prioritizing soft skills and focusing on individual experiences as we build for the collective must be a guiding beacon. I know that the systems we allow to be coded into place in our society with AI algorithms like Pennsylvania's AFST, built on faulty logic and biased data, are going to have anything but a temporary impact. This article helped me reflect on the privilege and power I have in a society implementing tech in its child welfare systems. I must be actively intentional as I contribute to solving wicked problems with technology and run toward my fears. Getting a glimpse of the issues faced by others who fear their kids will be taken away for being perceived as bad parents makes me more willing to take risks that put my individual comfort on the line and to commit to better data-vetting protocols in my work.
References
ACLU. (2023, March 14). How Policy Hidden in an Algorithm is Threatening Families in This Pennsylvania County. Retrieved from American Civil Liberties Union: https://www.aclu.org/news/womens-rights/how-policy-hidden-in-an-algorithm-is-threatening-families-in-this-pennsylvania-county
Burke, S. H. (2022, April 29). An algorithm that screens for child neglect raises concerns. Retrieved from Associated Press: https://apnews.com/article/child-welfare-algorithm-investigation-9497ee937e0053ad4144a86c68241ef1
Child Welfare Information Gateway. (2021). Child Welfare Practice to Address Racial Disproportionality and Disparity. Retrieved from U.S. Department of Health and Human Services, Administration for Children and Families, Children's Bureau: https://www.childwelfare.gov/pubs/issue-briefs/racial-disproportionality/
Marshak, R. (2009). Reflections on Wicked Problems in Organizations. Journal of Management Inquiry, 58-59.
Wong, E. (2020, October). What are Wicked Problems and How Might We Solve Them? Retrieved from Interaction Design Foundation: https://www.interaction-design.org/literature/article/wicked-problems-5-steps-to-help-you-tackle-wicked-problems-by-combining-systems-thinking-with-agile-methodology

