
Red Flags! Data-led technology is leading families to mistrust, fear and avoid services


Welcome to the CSP blog space where we invite guest contributors to reflect on issues that intersect with their research, work or activism.



Val Gillies, Rosalind Edwards and Sarah Gorin


Artificial intelligence is central to the current UK government’s vision for the future. The Labour government’s plan is to ‘mainline AI into the veins of the nation’, unlocking deep efficiencies and saving billions of pounds.


This ambition builds heavily on the trajectory and infrastructure installed by previous Conservative administrations. Complex data systems have been reshaping citizen-state relationships for over a decade, most notably in relation to the care and control of families. AI-enabled tools are now widely used to monitor and profile households, allocate resources, assess risk and intervene. Most local authorities in the UK link together different administrative data trails on families. Many operate large ‘data hubs’ or ‘data lakes’, pooling information from a range of government and commercial agencies to profile vulnerable families and predict future problems before they happen.


Such systems are diverting eye-watering sums of public money to tech companies, yet their functionality and value for money have yet to be established. There have been no independent evaluations of their performance or accuracy, and little consideration of how these tools may be affecting the lives of the families they target. The turn to data-led systems has happened without any public consultation or discussion, leaving parents largely in the dark about how government may be using their information.


Our mixed-methods study explores how parents feel about these changes. We researched local authorities’ data practices in relation to children and families and conducted a representative national survey of parents’ views about them. We also convened focus group discussions and interviews to examine parents’ perspectives and experiences. Our article, entitled "Red flags! Parents’ perspectives on data led policy and practice in family intervention", points to a huge and worrying disconnect between the values and expectations of parents and current institutional data practices. Parents do not feel they are being properly informed about how their information may be processed, shared and used. Relatively few understand the capacities and potential impacts of this technology, and there is little support for its widespread use. Parents attach strong value to the principle of consent in guiding data management. Many expressed shock and anger on finding out their data is likely being shared without their knowledge.


More concerning still are the experiences of parents who had been profiled and targeted by data-driven systems. Interviews with service users paint a disturbing picture of how data tools can leave families vulnerable to punitive interventions and serious harm. Numerous parents described harrowing experiences of being investigated, falsely accused and stigmatised on the basis of long-past incidents, data errors or misinterpretations. In two cases, children had been wrongfully removed and placed in care before being returned by the court. Parents in our interview sample were left feeling scared and powerless. They struggled to access basic rights as data subjects and found they were unable to challenge the warped and partial impressions generated by their profiles, even when explicit errors or misunderstandings were identified.


Our research provides clear evidence that such families are disengaging from crucial services out of fear that their data will be linked and profiled. Parents described actively avoiding hospitals, removing their children from school and self-excluding from mental health care in an attempt to avoid triggering further interventions. This reflects low levels of trust in data tools among the broader population, with a majority of parents feeling that data tools are potentially discriminatory.


The current hype surrounding AI and the technocratic fantasies it supports may be captivating politicians but it is not fooling the families it is tracking and profiling.


Abstracting and sorting parents’ data into classifications, scores and ‘red flags’ strips out social and cultural meaning, leading to skewed interpretations and inappropriately heightened impressions of risk. Data tools present a simple and actionable account to professionals, but it is far from a reliable one. The aim of the current government is to go further and faster in embedding these automated systems into the public sector. As we outline, there are real human and social costs to increasing reliance on this technology for welfare delivery. Families who are harmed rather than helped by state services are likely to disengage altogether, exacerbating the social problems such systems purport to fix.


(Image credit: "Close up of Computer Hardware" by panumas nikhomkhai, free to use)


