Commentary: AI can help fix what's broken in foster care
President Donald Trump's executive order directing states to deploy artificial intelligence in foster care isn't just welcome—it's overdue.
The provision calling for "predictive analytics and tools powered by artificial intelligence, to increase caregiver recruitment and retention rates, improve caregiver and child matching, and deploy Federal child-welfare funding to maximally effective purposes" addresses real failures in a system that desperately needs help.
The foster care system's problems aren't hypothetical. Caseworkers manage 24-31 families each, with supervisors overseeing hundreds of cases. Children wait years for permanent placements. Around 2,000 children die annually from abuse and neglect, with reporting gaps suggesting the real number is higher. Overburdened workers rely on limited information and gut instinct to make life-altering decisions. This isn't working.
AI offers something the current system lacks: the ability to process vast amounts of information to identify patterns human caseworkers simply cannot see. Research from Illinois demonstrates this potential. Predictive models can identify which youth are at highest risk of running away from foster placements within their first 90 days, enabling targeted interventions during a critical window. Systems can flag when residential care placement is likely, allowing caseworkers to connect families with intensive community-based services instead. These aren't marginal improvements—they represent the difference between crisis response and genuine prevention.
Critics worry AI will amplify existing biases in child welfare. This concern, while understandable, gets the analysis backwards. Human decision-making already produces deeply biased outcomes: subjective assessments by overwhelmed caseworkers operating without adequate information lead to inconsistent, sometimes discriminatory decisions. Research presented by Dr. Rhema Vaithianathan, director of the Centre for Social Data Analytics at Auckland University of Technology and lead developer of the Allegheny County Family Screening Tool, revealed something crucial: even when Black children scored as low-risk, they were still investigated more often than white children with similar scores. That finding didn't indict the algorithm; it exposed bias in human decision-making that the algorithm helped surface.
That's AI's real promise: transparency. Unlike the black box of human judgment, algorithmic decisions can be examined, tested, and corrected. AI makes bias visible and measurable, which is the first step to eliminating it.
None of this means AI deployment should be careless. The executive order's 180-day timeline is ambitious, and implementation must include essential safeguards:
Mandatory bias testing and regular audits should be standard for any AI system used in child welfare decisions. Algorithms must be continuously evaluated for disparate racial or ethnic impacts, with clear thresholds triggering review and correction; a simple illustration of what such a check could look like follows these safeguards.
Human oversight remains essential. AI should inform, not dictate, caseworker decisions. Training must emphasize that risk scores and recommendations are tools for professional judgment, not substitutes for it. Final decisions about family separation or child placement must rest with trained professionals who can consider context algorithms cannot capture.
Transparency requirements should apply to any vendor providing AI tools to child welfare agencies. Proprietary algorithms are fine for commercial applications, but decisions about children's lives demand explainability. Agencies must understand how systems reach conclusions and be able to articulate those rationales to families and courts.
Rigorous evaluation must accompany deployment. The order's proposed state-level scorecard should track not just overall outcomes but specifically whether AI tools reduce disparities or inadvertently increase them. Independent researchers should assess effectiveness, and agencies must be willing to suspend or modify systems that don't perform as intended.
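What would the bias-testing safeguard look like in practice? The specifics would vary by agency and vendor, but the core check is simple enough to sketch. The example below is a minimal illustration only, not any agency's actual procedure: it assumes hypothetical case records carrying a demographic group label and a yes/no risk flag from the model, compares flag rates across groups, and marks for human review any group whose rate falls below an assumed benchmark (here, 80 percent of the highest group's rate).

```python
# Minimal sketch of a disparate-impact audit for an AI screening tool.
# The data fields and the 0.80 review threshold are illustrative assumptions,
# not any agency's actual standard or any vendor's API.
from collections import defaultdict

REVIEW_THRESHOLD = 0.80  # assumed benchmark: ratios below this trigger review


def audit_flag_rates(cases):
    """cases: iterable of dicts with 'group' (demographic label) and 'flagged' (bool)."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for case in cases:
        totals[case["group"]] += 1
        if case["flagged"]:
            flagged[case["group"]] += 1

    # Rate at which the model flags each group for further action.
    rates = {g: flagged[g] / totals[g] for g in totals}
    highest = max(rates.values())

    # Ratio of each group's flag rate to the highest group's rate.
    ratios = {g: rate / highest for g, rate in rates.items()}
    return rates, ratios


if __name__ == "__main__":
    # Made-up numbers purely for demonstration.
    sample = (
        [{"group": "A", "flagged": i < 30} for i in range(100)]   # 30% flagged
        + [{"group": "B", "flagged": i < 18} for i in range(100)]  # 18% flagged
    )
    rates, ratios = audit_flag_rates(sample)
    for group, ratio in ratios.items():
        status = "review" if ratio < REVIEW_THRESHOLD else "ok"
        print(f"group {group}: flag rate {rates[group]:.0%}, ratio {ratio:.2f} -> {status}")
```

A real audit would go further, tracking rates over time, testing error rates as well as flag rates, and comparing outcomes after human review. The point of the sketch is simply that the check is mechanical and repeatable in a way that auditing individual caseworker judgment is not.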
The alternative to AI isn't some pristine system of perfectly unbiased human judgment—it's the status quo, where overwhelmed caseworkers make consequential decisions with inadequate information and no systematic oversight. Where children fall through cracks that better data analysis could have prevented. Where placement matches fail because no human could possibly process all relevant compatibility factors. Where preventable tragedies occur because risk factors weren't identified in time.
Implementation details matter enormously, and HHS must get them right. But the executive order's core insight is sound: AI and predictive analytics can transform foster care from a crisis-driven system to one that prevents harm before it occurs. The question isn't whether to deploy these tools; it's how to deploy them responsibly. With proper safeguards, AI can address the very problems critics fear it will create.
America's foster children deserve better than the status quo. AI gives us a path to deliver it.
____
Maureen Flatley is an expert in child welfare policy and has been an architect of a number of major child welfare reforms. She also serves as the President of Stop Child Predators. Taylor Barkley is Director of Public Policy at the Abundance Institute, focusing on technology policy and innovation.
_____
©2025 Tribune Content Agency, LLC.