5 lessons for implementing predictive analytics in child welfare

In child welfare, one persistent challenge is accurately identifying children at risk of maltreatment — work that requires gauging not only immediate risk but also the future likelihood of harm. Predictive risk modeling (PRM) offers promising new ways to address entrenched problems like this. PRM enables child welfare staff to identify earlier those individuals at long-term risk of adverse outcomes and to intervene before harm occurs.

Rhema Vaithianathan, Co-Director of the Centre for Social Data Analytics at Auckland University of Technology, shares five lessons she has learned about implementing PRM.

Fully Integrated Data is Not Necessary

An accurate and useful predictive model can be built without fully integrated data. As long as we can access a comprehensive, state-level child welfare data set with sufficient historical information, an adequate predictive model is achievable.

Frontline Practice and Priorities Must Lead

Not all possible uses of PRM will be ethical or desirable. Each model is built for a specific use and for a specific jurisdiction, and will be validated accordingly. So before embarking on building a PRM, it is important for the leadership of the county or state to set parameters on how it will be used. Established practice can run deeper than an agency is aware, so rather than looking for high levels of change in frontline practice within a short time frame, we should look for a trend of continuous change in the right direction.

Ethics and Transparency are Never “Done”

Ethical governance needs to be built into the agency for the lifetime of the tool; regular ethical reviews are essential to maintaining community support. As the project continues, transparency should also be revisited often to ensure the tool remains understandable to the community, the agency, and frontline workers. If the tool is not transparent, it is hard to gain the necessary trust and support.

Expect Methodology to Evolve

A natural evolution of methodology should be expected and encouraged, both during and after the implementation of a model. Careful monitoring of the model's performance and usefulness as it takes shape should prompt regular review of the choice of methodology.

Independent Evaluation Sharpens the Focus

The fact that a predictive model will be independently evaluated helps to build trust and support for the project. Committing to an independent evaluation also forces researchers and the agency to be clear about what the tool is setting out to achieve from the start, creating an agreed-upon measure of success.


Read the full article here.