
Risky modelling for child abuse: could these methods actually increase abuse, maltreatment and violence?

“What kind of love is it that has violence as a silent partner?” Keri Hulme, The Bone People

This week the Guardian reported “Vast quantities of data on hundreds of thousands of people is being used to construct computer models in an effort to predict child abuse and intervene before it can happen” https://www.theguardian.com/society/2018/sep/16/councils-use-377000-peoples-data-in-efforts-to-predict-child-abuse

The software can be used to generate revenue for the council through the Troubled Families payments-by-results scheme. Under the Troubled Families scheme, councils are paid £1,000 for each family they sign up to the programme, with a further payment of £800 when the family meets certain criteria.
This move is described as a way to reduce costs in a context of increasing austerity, cuts to services and deprivation within a neo-liberal agenda. There are also calls for more complex thinking and modelling, and for increased use of technology:

https://www.gov.uk/government/speeches/matt-hancock-my-priorities-for-the-health-and-social-care-system

Should the UK be going further down the road of predictive modelling? Is there any evidence that this approach can pre-emptively predict abuse risk and prevent poor outcomes? Complexity thinking often finds unintended negative consequences when changing a complex system.
This topic has been extensively explored in New Zealand, where a predictive risk modelling system for child abuse was developed and trialled:
http://www.reimaginingsocialwork.nz/2015/05/children-at-risk-and-the-ethics-of-predictive-risk-assessment/
There are many possible serious consequences of trying to model children’s risk of future abuse:

1 Ethical and legal: Is it ethical or legal to share or use people’s data in such a way?

It may also not be legal under GDPR, which states that ‘You can only use the personal data for a new purpose if either this is compatible with your original purpose, you get consent, or you have a clear basis in law’.
Who will be able to access this data, and what will it mean for people and families who are scored in a high risk category?

There are also issues relating to core ethical principles: existing risk instruments lead to an unacceptably high level of false positives (families inaccurately deemed to be high risk) and a high level of false negatives (dangerous families wrongly judged safe).
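To see why this happens, consider the base-rate problem: when the outcome being screened for is rare, even an apparently accurate instrument will flag far more families wrongly than rightly. A minimal sketch in Python, using purely illustrative numbers rather than figures from any real child-protection model:

```python
# Minimal sketch of the base-rate problem behind false positives.
# Prevalence, sensitivity and specificity here are illustrative
# assumptions, not figures from any real child-protection model.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Proportion of families flagged 'high risk' who truly are (Bayes' rule)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Even a seemingly strong screen (90% sensitivity, 90% specificity)
# applied to a rare outcome (2% prevalence) is wrong most of the time:
ppv = positive_predictive_value(prevalence=0.02, sensitivity=0.90, specificity=0.90)
print(f"Flagged families who are genuinely high risk: {ppv:.1%}")  # ~15.5%
```

Under these assumptions roughly five out of six flagged families would be false positives, before any of the consequences of being labelled are considered.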

2 Stigmatisation of already vulnerable and marginalised groups:
“As the variables used rely more heavily on data about mothers (as is more available in the data), and use socio-economic status (SES) as a variable, female caregivers will overwhelmingly be identified as ‘risky’” (Keddell 2015)
This is likely to result in increased public stereotyping and rejection “assigning the label ‘risky’ permanently to people who have not harmed and may never harm their children based solely on statistical association.” (Keddell 2015)
Whilst we might know factors that are associated with abuse, these are NOT causal. Labelling people as a ‘risk’ to their children is likely to reinforce existing structural inequalities.

3 Individualisation and increasing risks:
“Where risks are individualised, for example, this clearly reflects a neo-liberal concern with personal responsibility and a limited role of the nation state.”
So we ignore the fact that people are living in areas with very real risks and concerns (noise, pollution, debt, violence, food insecurity, disability, lack of access to healthcare, education, culture, art, community and safety) and instead blame them for their individual ‘choice’ to engage in any kind of perceived ‘risky’ behaviour. This takes responsibility away from the state, which would otherwise have to work to reduce inequality and improve living conditions.

4 Complexity and ‘machine’ learning:
Big data is seen as an answer to the huge issues we as a society are facing, but data is useless if we are not asking the right questions or if we are unable to understand, interpret or use the results. The complexity of the methodological processes makes it difficult to predict the ethical consequences of big data systems.
“The whole point about big data analytics is that the number and form of calculations that need to be carried out exceed the scale and complexity which people can comprehend directly” (McQuillan 2018)
McQuillan goes on to equate this to the use of drone technology. If we risk-score populations we are removed from the individuals and from the real consequences to them: both the consequence of knowing they are believed to be ‘a risk’, and the loss of their story and personal circumstances, which could result in dehumanisation.
“Unconstrained machine learning can become a drone perspective, a targeting gaze that blurs legality and divides the social along decision boundaries of ‘us and them’.” (McQuillan 2018)

5 Unintended consequences:
Another recent example of risk modelling looked at teen dating violence (TDV): whether a theoretically informed, empirically based algorithm could adequately estimate the likelihood of physical and sexual TDV perpetration during vulnerable developmental periods. The study found that adolescents with positive results on the algorithm were over twice as likely to perpetrate dating violence over the course of six years (Cohen, Shorey et al. 2018); what this relative risk means in absolute terms is illustrated in the sketch below.
The authors themselves, and commentators on the study, recognise that labelling teenagers as potential future perpetrators of dating violence could be highly stigmatising and lead to a self-fulfilling prophecy where individuals believe they are destined to enact violence (Thurston and Howell 2018).
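It is worth unpacking what “over twice as likely” means in absolute terms: a relative risk above two is entirely compatible with the large majority of flagged adolescents never perpetrating violence at all. A hedged sketch, using hypothetical counts rather than data from the study:

```python
# Hypothetical counts, NOT data from Cohen, Shorey et al. (2018),
# chosen only to illustrate a relative risk of just over two.

flagged = {"perpetrated": 120, "total": 1000}    # screen-positive adolescents
unflagged = {"perpetrated": 50, "total": 1000}   # screen-negative adolescents

risk_flagged = flagged["perpetrated"] / flagged["total"]        # 0.12
risk_unflagged = unflagged["perpetrated"] / unflagged["total"]  # 0.05
relative_risk = risk_flagged / risk_unflagged                   # 2.4

print(f"Relative risk: {relative_risk:.1f}")
print(f"Flagged adolescents who never perpetrate: {1 - risk_flagged:.0%}")  # 88%
```

So even a genuinely predictive screen of this kind would, under these illustrative numbers, attach the label to many adolescents who never go on to harm anyone.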
This sort of modelling is also used in the criminal justice system, where it has historically led to increasing fragmentation and alienation of those deemed ‘risky’ (for example ethnic minorities, those with serious mental health conditions, children leaving care) and to increases in the very behaviours being flagged in this misguided effort at prevention.
“The policy implications of these findings are stark. Developmentally speaking, experiences of reduced fairness correlate with social isolation, deprivation of dignity, reduced faith in public institutions, and an increased propensity towards activities and behaviours deemed risky or socially non-normative” (Nichols 2017)
Once a person or place is categorised as ‘at risk’, is it possible for that label to be removed, and under what conditions? This is also important for communities:
“It is increasingly common to use measures to determine relative degrees of vulnerability across a particular institutional geographic context (e.g. the designation of Neighbourhood Improvement Areas and Vulnerable Schools by the municipality and the school board, respectively). Officially, the scales are used to ensure equitable distribution of limited resources, but they also result in particular spaces being coded institutionally (through crime or school achievement data, for example) as vulnerable or unsafe. These designations justify the use of place-based public sector interventions that are not always experienced as resources or supports by people who become their focus.” (Nichols 2017)
I presented at the Public Health England conference last week about an asset-based community development project on the Wirral that I am evaluating. Interviewing individuals across the area, it is clear that people are very aware of the reputation of the area they live in.
Also at the conference, I attended a session on using data to model future trends; this is increasingly being used by councils to plan services and predict future need.

It is also possible that the current interest in screening for adverse childhood experiences (ACEs) could be used to feed these models: rather than asking ‘what happened to you?’, the question becomes ‘who might you harm in the future?’
The data fed into these algorithms reflect what we already know about the risk factors associated with child abuse and maltreatment. Upstream efforts should act on that knowledge: not focusing on identifying families where abuse is occurring, but looking at the wider environment around the family and preventing the conditions that lead to abuse occurring (Keddell 2018).
“Since rates of maltreatment decline as material supports increase (Pelton, 2015), efforts to reduce poverty must be pursued” (Gillingham 2017)

Conclusion
Risk modelling to prevent child abuse, and the wider uses of big data and machine learning, need to be robustly evaluated to establish whether they are ethical and whether they deliver better predictive value and improved outcomes, or unintended increases in risk.
Safeguards should be in place to prevent data being used in this way without consent, and there should be public discourse about the wider societal ramifications of using big data and machine learning (Amrit, Paauw et al. 2017).

References
Amrit, C., et al. (2017). “Identifying child abuse through text mining and machine learning.” Expert Systems with Applications 88: 402-418.

Cohen, J. R., et al. (2018). “Predicting teen dating violence perpetration.” Pediatrics: e20172790.

Gillingham, P. (2017). “Predictive risk modelling to prevent child maltreatment: insights and implications from Aotearoa/New Zealand.” Journal of Public Child Welfare 11(2): 150-165.

Keddell, E. (2015). “The ethics of predictive risk modelling in the Aotearoa/New Zealand child welfare context: Child abuse prevention or neo-liberal tool?” Critical Social Policy 35(1): 69-88.

Keddell, E. (2018). “The vulnerable child in neoliberal contexts: the construction of children in the Aotearoa New Zealand child protection reforms.” Childhood 25(1): 93-108.

McQuillan, D. (2018). “People’s councils for ethical machine learning.” Social Media + Society 4(2): 2056305118768303.

Nichols, N. (2017). “Technologies of evidence: An institutional ethnography from the standpoints of ‘youth-at-risk’.” Critical Social Policy 37(4): 604-624.

Thurston, I. B. and K. H. Howell (2018). “To screen or not to screen: overreliance on risk without protective factors in violence research.” Pediatrics 141(4): e20180075.

Further discussion of UK councils using data to model child abuse risk:
https://www.youtube.com/watch?v=LG5Phrq6mr0
