PODCAST DETAILS

ADM in Child and Family Services: Mapping what is happening and what we know
6 June 2022
Speakers:
Natalie Campbell, RMIT University
Professor Paul Henman, UQ node, ADM+S
Dr Joanna Redden, Western University, Canada
Assoc Prof Philip Gillingham, UQ node, ADM+S
Prof Rhema Vaithianathan, Auckland University of Technology
Carol Ronken, Director of Research, Bravehearts Australia
Listen on Anchor
Duration: 31:42

TRANSCRIPT

Natalie Campbell:
Welcome back to the ADM+S podcast. In this episode, we’re spotlighting a discussion on ADM in child and family services. ADM+S Chief Investigator Professor Paul Henman hosted this discussion, with Dr Joanna Redden from Western University, Canada, Professor Rhema Vaithianathan from Auckland University of Technology, Carol Ronken, Director of Research at Bravehearts Australia, and Associate Professor Philip Gillingham from the University of Queensland.
In this episode, we’re revisiting the key takeaways of how ADM is being used in child and family services, and the associated legal, ethical, organisational, and data challenges that ensue.
To kick off the conversation, CI Professor Paul Henman:

Paul Henman:
Just to get a sense of shared knowledge of where we’re coming from: child and family services can cover a wide range of services. Typically, people think of child protection, focusing mainly on the identification and management of child abuse and neglect, including out-of-home care, but the sector can also include parenting education and training, and family support. It is very much a sector with a lot of human service professionals, and with contributions from both state and non-state actors, particularly in Australia and the UK.

On the other hand, automated decision making, when we think about it, is the use of digital technologies and algorithms to automate all, or parts of, human decision making. In child and family services, risk assessment has been a very important part of that, and that risk assessment is very much focused on decision support tools and the long-standing process of structured decision making. But there are obviously other areas of child and family services where automated decision making may become part of the work, or may be seen to be helpful.

Natalie Campbell:
The first speaker is Dr Joanna Redden, explaining the findings of her research and involvement in the field, the associated challenges, and the benefits of ADM methodologies.

Joanna Redden:
The research that I’m going to draw on today comes from a few projects, mainly the Data Scores as Governance project which Paul mentioned and which we were presenting on in Montreal; that project has concluded. The research was published in 2018 and academic articles followed from the publication of the report. That project was really focused on mapping and analysing how risk assessment, or different kinds of scoring systems, were being implemented by local authorities in the UK. Since then we’ve had a follow-on project that we’ve been working on, which has been looking at civic participation: how to advance civic participation in the ways that governments are making use of data systems. More recently I’ve been looking, with my colleagues, at where and how government agencies are piloting and then deciding to cancel their use of different kinds of data systems.

The project involved a range of methods. We held multi-stakeholder workshops with practitioners, members of civil society organisations and community organisations, as well as academics. We did desk research, which involved automated searches modelled on the Algorithm Tips project in the United States, where we scraped a range of government websites for documents. We submitted 423 freedom of information requests to try to get more information about what was happening and where, and then we did more detailed case study investigations looking at six different types of data systems in use by local authorities. We also did interviews with public officials and practitioners, as well as with members of civil society organisations – 27 interviews in all. And we built what we call the Data Scores Investigative Tool in order to share the information that we gained in our documents with others, in the hope of advancing more debate and investigation in this area.
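[Editor’s note: the automated-search step Joanna describes can be pictured with a minimal sketch – fetching a set of pages and flagging those that mention scoring or risk-assessment terms. The URLs and keywords below are hypothetical placeholders, not the project’s actual search configuration.]

```python
# Minimal sketch of keyword-based document discovery across government pages.
# URLs and keywords are illustrative placeholders, not the project's real inputs.
import urllib.request

PAGES = [
    "https://example.gov.uk/council-reports",    # hypothetical council page
    "https://example.gov.uk/committee-minutes",  # hypothetical minutes page
]
KEYWORDS = ["risk score", "predictive analytics", "data warehouse"]

def fetch(url: str) -> str:
    """Download a page and return its text (empty string on failure)."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="ignore")
    except OSError:
        return ""

hits = []
for url in PAGES:
    text = fetch(url).lower()
    matched = [kw for kw in KEYWORDS if kw in text]
    if matched:
        hits.append((url, matched))

for url, matched in hits:
    print(f"{url} mentions: {', '.join(matched)}")
```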

So, in terms of general findings, we found that applications were being used across different areas of government services: child welfare, social care, policing, and fraud detection. We identified the development of data warehouses and expanded data sharing arrangements in order to enable greater uses of data. We identified different kinds of systems, from systems being used for population-level analytics, future planning and resource allocation, to systems developed to risk assess populations or to risk assess families and individuals. We found an ongoing tension, speaking quite broadly, between the needs and intentions of local authorities and councils to do more with less, and the rights and democratic principles being raised by civil society organisations who were concerned about increasing data uses across different areas. We found, quite broadly speaking, that applications and levels of transparency were highly context dependent. We found a range of different approaches to making use of data systems, from public-private partnerships, to in-house development, to systems being purchased off the shelf and then implemented for a local authority’s own needs.

Overall and across the board, we didn’t find much effort to measure the impact of the systems on frontline staff, on resource and service allocation, or on service users.

So, if I focus here on systems of thought, what we found is that local authorities were turning to these systems largely because of the austerity programme introduced by the Conservative government after the financial crash, which saw some local authorities in England have their budgets cut by as much as a third. As a result, local authorities were trying to help more people with less, and these systems were viewed as a way to provide more targeted services. I already mentioned briefly, in terms of policy, that the Troubled Families programme was a driver for the implementation of these systems. The programme was introduced in 2012 with funds set aside for local authorities that were able to identify families that were labelled as troubled. The term itself is problematic, but families were labelled as troubled based on set criteria. The goal was to enable more direct interventions, and local authorities were compelled to do this because being able to categorise families as troubled could lead to more resources, which were desperately needed. In terms of governmentalities, I should say, rights organisations and civil society organisations raised concerns about this kind of labelling in datafied landscapes, because of how easy it is for labels to stick with people and become amplified in ways that can affect opportunities.

Concerns have also been raised about the extent to which this kind of system individualises social problems by stigmatising families, and directs attention away from the wider social contexts that lead to family crises. In terms of governmentalities and legalities, we found that a duty of care was used as the main justification for using predictive risk assessments, but that this was being challenged by some who were arguing for greater debate about rights, and about whether or not there are adequate protections for people whose data is caught up in these systems, particularly since the potential harms of being labelled or being risk assessed weren’t being investigated. We also identified, across the different applications, different ways that local authorities went about engaging with consent. For example, Manchester actively sought consent from those whose data was used in their system, whereas in Hackney there was a decision not to seek consent because it was perceived that this might compromise the effectiveness of the system. And so what we found was that there are differing attitudes about the need for subject consent, which suggests divided opinion in these areas.

Natalie Campbell:
Moving on, Associate Professor Philip Gillingham addresses how and why we need ADM in child protection services, and how a lack of transparency in ADM innovations creates hurdles for all those involved in the system.

Philip Gillingham:

So, where is ADM being used, or touted to be used, in child and family services? Obviously Joanna has referred to what’s going on in England in particular, but there’s also a predictive risk model in New Zealand. That’s shown there in blue, as is Amsterdam, because they never actually got implemented. Hackney, as Joanna has already talked about, is in red because it stopped due to a public outcry over privacy. There was a system in Chicago that worked for a few months and then started making some really strange predictions about who was and wasn’t at risk, so it was discontinued. But unfortunately, and this is a point that comes up in this work all the time, there’s no transparency about any of that. It’s very hard to find any information, partly because it was a private company that developed the algorithm, so they don’t want to expose their financial stake in the whole thing.

Now, in terms of how it’s being used, it’s hard to say because of all the secrecy and lack of transparency. But Johnson Peterson, interestingly enough, says it could be used for all of the above, and he talks about actually getting rid of social workers and using these predictive tools to match clients to services – as if that’s all we do – which is a gross underestimation of what social workers actually do. Joanna has already talked about the vulnerable families initiative, which I think wrapped up last year, possibly this year. A lot of it is about risk assessment and giving people scores, as Joanna said. But also, it’s about using proxies for child maltreatment: what can people hone in on as something we can predict, like re-notification or re-substantiation, and so on. And just recently, the What Works Centre for Children’s Social Care in London undertook a roughly eight-month project with four local authorities to try and develop predictive models. Summing it all up, none of the models they could produce were considered sufficiently accurate in their predictions to be of any sort of practical use to anyone. And I was very surprised by that, because I tried to follow their process and met with them last year about all of this, and I thought if anybody could do something with this stuff, it would be them. But apparently not.

There isn’t a lot of research about how professionals and administrators engage with ADM. My research with the SDM tools was fairly close, but it was 13 years ago, so we need to have another look at that. Certainly, the SDM tools were embedded in the information system that protective workers were using, so they couldn’t avoid them. But then again, they didn’t use them in their decision making as such. And there are the consultants as well; in Philadelphia, one of the papers in 2017 – I haven’t put references in this written document – found that up to 25 percent of recommendations for family supports were being ignored.

So what data, research and knowledge are used? Well, Joanna has already talked a bit about the limitations of administrative data, and that’s what people are having to use at the moment, because that’s what’s there. But there’s an argument for saying: okay, look at the research. What are the predictors of maltreatment, not within administrative systems but in the general population? And try to hone in on some factors that we know from research are actually causative rather than just correlated, because sometimes the correlations can be a bit misleading. It might lead back to prior involvement and indicators of poverty – and how well is it going to differentiate between people? Some say it’s about poverty, or single parenthood, but there are a lot of people like that in society who don’t abuse their children. So, it’s only telling us what we already know. Another piece, or just an observation really, from a study that Tim was doing over at ANU: they played around with different predictors, but they found that most of them didn’t really affect the accuracy. So, what he’s arguing is that there are usually two or three predictors doing the predicting; the rest have some correlation, but not strong enough to make any difference whether you put them in or not. So for claims that say, oh, you know, we use 30 different data points – well yeah, but show me what the correlations are and how you quantify them. It comes back to the transparency argument, because people aren’t necessarily transparent about what they’re doing.
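[Editor’s note: Philip’s point that extra predictors add little can be checked empirically – fit a model on a handful of strong predictors, then again with many weak ones added, and compare cross-validated performance. The sketch below uses synthetic data and scikit-learn, so the numbers are illustrative only.]

```python
# Sketch: does adding many weakly informative predictors improve accuracy?
# Synthetic data only; feature counts and effect sizes are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000

# Three genuinely informative predictors plus 27 noise columns.
strong = rng.normal(size=(n, 3))
noise = rng.normal(size=(n, 27))
logits = strong @ np.array([1.2, -0.8, 0.9])
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_small = strong
X_large = np.hstack([strong, noise])

for name, X in [("3 predictors", X_small), ("30 predictors", X_large)]:
    auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```

On data like this, the 30-predictor model scores essentially the same as the 3-predictor one, which is the pattern Philip describes.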

So, just to finish off, some of the questions that have formed in my mind since I started doing this work a few years ago: Why do we need ADM in child protection services? What are we trying to improve? We know that decisions are subjective and biased and so on, but when it comes to really serious cases, that’s usually mitigated by involving a whole team of people. So, why do we need it? What is it going to give us over and above what we’ve already got? Will ADM ever be as accurate as practitioners? Bear in mind that at the moment we’re having to use data sets that were created by practitioners, and people are saying these models are what, 60, 70, 80 percent accurate? Well, that’s not as accurate as practitioners, because you’re treating the data you’re using as if it were 100 percent accurate, and again that might be very misleading, because outcomes frequently aren’t recorded. There’s also a question – and this applies to structured decision making as well, so we need to check as we roll out ADMs – do these things oversimplify the complexities involved in dealing with child maltreatment? And the answer is obviously yes, they do. I’ve done ethnographic work: I watch what practitioners do, I go around with them, I know what they’re doing. Yet when you then look at the file, the complexities and tensions they’re dealing with aren’t necessarily in the file. Also, with the structured decision-making tools, after people had been using them for about a year they got a bit weary of them.

It’s the really hard cases that things like ADMs just wouldn’t necessarily capture, and it’s about what information gets left out.

Natalie Campbell:
Prof Rhema Vaithianathan brings a different angle to the discussion, contending that ADM in child and family services must be assistive technology, not completely autonomous.

Prof Rhema Vaithianathan:
So, just to be clear: even though this session is called ADM, automated decision making, none of the tools that I deploy, that our centre deploys, replace decision making. They are really only ever a decision support tool. So they always have what we call a human in the loop. With these sorts of tools, there is no intention to automate them.

So, let me start with what predictive risk models are – the kind that we work with. There are three sorts of features about them that are different for people who are used to more structured decision models, where a caseworker might fill in fields – the work that Philip was talking about in Queensland, where people said they were stating the obvious. Well, it’s not that they’re stating the obvious; it’s that the caseworker who is being asked to make a decision is filling in the fields from their own observations and their own judgment. So, in a way, it is taking what the caseworker knows, putting it down on paper, perhaps weighting it, and then developing a risk assessment. With a predictive risk model, think of it as the same sort of thing, but rather than using the caseworker’s own form-filling, it goes into data systems – often just the child welfare case management system – grabs the history of each of the people on the case, pre-populates those fields, and uses those weights to try to generate a risk of adverse harm. So, what can PRM do? Well, the agencies that I work with – and most of my work is in the US, maybe three US sites at the moment – are trying to use these tools to support caseworker decision making: using a risk score to help them in their assessment of risk and safety, and to provide an indicator of case complexity. Because the scores draw on the complex system interactions of the people on the case, they can allow supervisors and others to know how much collateral they should expect from a case. We know that one of the real challenges for a lot of our caseworkers is that they have to go and look for the history, and to complete that history it’s useful to know: should I be expecting to see a lot of history as a supervisor? When you’ve got a junior caseworker going out to do the site visits, you need to know whether they really ought to be coming back with a lot more information, a lot of collaterals to contact, or whether there is much less. So these are ways that these tools are used, and of course at the end of the day it helps at both the policy level and the decision-making level, because it summarises a large amount of data into a single index. So what is the motivation? There are a lot of different ways I could frame the motivation, but one of the concerns that a lot of the agencies we work with have is well explained by Brian Taylor, who talks about the fact that whenever we have a tragedy in child welfare, whenever a child dies, when we go back and look at the challenges and what was happening, what we often end up with is asking organisations to gather more and more and more information. So they tend to really emphasise – and I’m sorry this is covering the quote – what he’s pointing out is that we emphasise the gathering of information but not the processing of that information. And we leave that very much up to what we call professional discretion.
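[Editor’s note: as a rough picture of what Rhema describes – pre-populating fields from administrative history and weighting them into a single index that a person then interprets – here is a minimal sketch. The feature names, weights, and intercept are hypothetical, not any deployed model’s.]

```python
# Minimal sketch of a predictive risk model as a weighted score over
# administrative-history fields. Features and weights are hypothetical.
import math

# Hypothetical weights a model might learn from historical data.
WEIGHTS = {
    "prior_referrals": 0.35,
    "prior_substantiations": 0.60,
    "months_since_last_contact": -0.02,
    "household_size": 0.05,
}
INTERCEPT = -2.0

def risk_score(case: dict) -> float:
    """Return a 0-1 probability-style score from pre-populated fields."""
    z = INTERCEPT + sum(WEIGHTS[k] * case.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# Example case assembled from (hypothetical) system history rather than
# from caseworker form-filling.
case = {"prior_referrals": 4, "prior_substantiations": 1,
        "months_since_last_contact": 6, "household_size": 5}
print(f"risk score: {risk_score(case):.2f}")
# The score is shown to a human decision-maker; nothing is acted on automatically.
```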

One of the places that we’ve been working on a lot is at the front door of child welfare. One of the reasons we’re really interested in that is because we know that as concerns about child welfare increase, as child maltreatment gets onto the front page of our newspapers, what our communities tend to do is start referring more and more children into child welfare. If you look at the data in the US, one in three American children will be investigated for abuse and neglect before they turn 18, and for African-American and Black children, that’s one in two. So, if you imagine a community where one in two children is being subjected to that kind of invasive investigation for abuse and neglect, you can understand why these sorts of very invasive systems are not going to achieve a lot. What is happening is that our child welfare systems, which were really designed for a kind of minimal touch to look for severe abuse and neglect, are now becoming much more involved in families’ lives. And one of the reasons is that, at the front end, we have provided very few tools for people to make the screening decision.

On fairness, one of the biggest concerns we have is surveillance bias in child welfare data systems.

So, the next step that we’re working on is really about the challenges of supervision. If anyone saw the film The Trials of Gabriel Fernandez, it was a terrible story about a young boy in LA County who was killed by his mother and stepfather, and whose family had multiple call-ins to child welfare and was very actively involved with child welfare. It’s a really interesting case, and the most striking part is that the supervisors were taken to court over it for negligence, and maybe even manslaughter. There’s a Netflix documentary about the trial. One of the things they asked Greg Merritt, who was one of the supervisors, is why he was unable to keep eyes on this family as much as was needed, because clearly people were not doing the things they should have been doing; the child welfare system was not serving this family well. And he replies, I think the most children I had at any one time was about 280 children. The interviewer asks how difficult it is to keep track of that many cases, and he says you can’t, you don’t know every case you’re supervising, because there are so many and my workload was so immense. So, we’re actually working with some counties who have this problem to build out a sort of complexity flag, using similar principles, that gives a supervisor a single dashboard and allows them to drill down into cases and find out what’s happening and how collaterals are being contacted, and also to look at the history of those cases. So, that’s some of the work that we’re doing.
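[Editor’s note: in its simplest form, the supervisor-facing “complexity flag” Rhema mentions could be a rule that surfaces the few cases a supervisor should look at first. The fields and weights below are invented for illustration only.]

```python
# Sketch of a supervisor-facing complexity flag: rank open cases so the
# most complex ones surface first. Fields and weights are invented.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    open_referrals: int
    people_on_case: int
    prior_system_contacts: int

def complexity(case: Case) -> int:
    """Crude complexity index: more history and more people means more collateral to expect."""
    return (2 * case.prior_system_contacts
            + case.people_on_case
            + 3 * case.open_referrals)

caseload = [
    Case("A-101", open_referrals=1, people_on_case=3, prior_system_contacts=2),
    Case("A-102", open_referrals=3, people_on_case=6, prior_system_contacts=14),
    Case("A-103", open_referrals=0, people_on_case=2, prior_system_contacts=1),
]

# A supervisor with hundreds of cases cannot review them all; show the top few.
for c in sorted(caseload, key=complexity, reverse=True)[:2]:
    print(c.case_id, "complexity =", complexity(c))
```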

Natalie Campbell:
To round out this wonderful discussion, Carol Ronken provides an overview of Bravehearts Australia, and her first-hand experience of ADM systems at play in the sector.

Carol Ronken:
I’m going to start this off with a little bit of a disclaimer. I’m probably one of the most technologically challenged people that you’ll ever meet, so if I am using terms incorrectly, I do apologise. Having said all that, we have been looking at automated decision-making tools for a number of years now. It’s something that Bravehearts began looking at in response to some issues and concerns we had, particularly about the family law system.

One of the biggest issues that we’ve been looking at over the last few years is the lack of information sharing, and the difficulties around assessing risks for children – particularly when we consider child sexual assault specifically, where often there are no physical signs. So often there is nothing that stands out. With physical abuse you can often see physical signs of harm to a child; with sexual abuse you very rarely get those indicators. We were particularly looking at the family law system, but at the same time having thoughts about the child protection system more broadly, and about the fact that there are so many actors, I guess, in a child’s life who see and observe behaviours – little red flags that might indicate that this child is at risk or is being harmed. Those observations, for us, were something really important in looking at the risk of abuse and neglect for children, and how we can actually assess that risk.

So, a few years ago we were talking with a company and looking at developing a tool that would allow a number of different players to input information and observations that they saw – at the time we were talking about an app – and to have that go straight away to central child protection services. Once it reached a certain risk level it would then trigger a human intervention: someone would look at the file, look at what the actual observations and information being provided were, and decide whether or not it signified that there needed to be an intervention. For us, that early identification and intervention for vulnerable children is absolutely critical, because it reduces the occurrence of child abuse and neglect more broadly and can definitely improve the life trajectories of children.
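[Editor’s note: the design Carol describes – multiple reporters logging observations against a child’s record, with human review triggered once accumulated risk crosses a threshold – might be sketched as below. All observation types, weights, and the threshold value are hypothetical.]

```python
# Sketch of threshold-triggered escalation: observations from different
# reporters accumulate, and a human reviewer is alerted past a threshold.
# Observation types, weights, and the threshold are hypothetical.
OBSERVATION_WEIGHTS = {"behavioural_change": 2, "disclosure": 8, "unexplained_injury": 5}
REVIEW_THRESHOLD = 8  # arbitrary illustrative cut-off

class ChildRecord:
    def __init__(self, child_id: str):
        self.child_id = child_id
        self.observations = []

    def add_observation(self, reporter: str, obs_type: str) -> None:
        self.observations.append((reporter, obs_type))

    def cumulative_risk(self) -> int:
        return sum(OBSERVATION_WEIGHTS.get(t, 1) for _, t in self.observations)

    def needs_human_review(self) -> bool:
        return self.cumulative_risk() >= REVIEW_THRESHOLD

record = ChildRecord("child-001")
record.add_observation("teacher", "behavioural_change")
record.add_observation("coach", "behavioural_change")
record.add_observation("gp", "unexplained_injury")

if record.needs_human_review():
    # In Carol's description, this is where a person examines the underlying
    # observations and decides whether intervention is warranted.
    print("Escalate to human reviewer:", record.observations)
```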

We definitely see that there are huge benefits to automated decision making. Having said that, we also have a lot of questions and considerations. We know that a lot of the commentary around this is that there is so much human bias and so many erroneous judgments made by those working in child protection, for all sorts of reasons. And the case of Gabriel Fernandez is a really interesting one, and I’m really glad that it was brought up. I watched that documentary a while ago and it’s really challenging, because these workers have such high caseloads, and trying to manage all of that and keep on top of it all is absolutely huge. So, for us, ADM allows for that potential to spot patterns and correlations in information and data sets – around behaviours and observations around children – and to objectively assess or profile the level of risk involved.

I do think that data around a child, from more than one source, will help create a bigger picture of that child’s risk. We know that there are so many actors in a child’s life – their teachers, their childcare workers, their sporting coaches – all these people who may be seeing those signs and indicators. They might be small things that don’t actually trigger the thought of having to report that this child may be at high risk of harm, but as we always say about child sexual assault, it’s like a jigsaw puzzle. All these people have little pieces of the puzzle, and being able to bring all of those pieces together is really important to building a proper picture of what is happening in that child’s life. I think, for me, there is also the need to understand and have transparency around what those predictive algorithms look like: what are the inner workings of the models and tools that may be used, or are being considered for use? And I suppose – and this is me – I’d expect that for many people in the sector, having that presented in a really simple and easily understood way would certainly help raise confidence in the sector around the use of predictive tools in assessing child risk in protective services.

On ethics and privacy, I think there is also a really interesting question around who has access to that information, and the level of information that is shared to facilitate that intervention. I think there are legal and moral privacy issues that we do need to think about, and then choice as well – the right to confidentiality. As someone who works in the sector, I always think that we need to put the best interests of the child first. But I still, as a very left-leaning, social justice type of person, think: well, what about individual rights to privacy and confidentiality around their information? I think that really does need to be considered as well. And then oversight, I think, is the last thing I wanted to mention – and I have no idea how long I’ve been speaking for – but for me oversight is also really important. I like the idea that these tools are thought of as being supportive of decision-making workers, rather than as the decision-making tool itself, because I think it’s really important that there is that human intervention: that once a certain level of risk is reached, someone is able to look at the information and pull it apart to see whether or not there are legitimate concerns there, and what level of intervention is required. I have found this whole child protection system area really interesting, but I would also suggest that we look at the family law system as well. It certainly is something that, here in Australia, is being focused on a lot at the moment. And there are huge concerns around risks to children where there are allegations of sexual abuse, or abuse or neglect more broadly, and where the family law system has been leaving children in positions of risk because it says it is not its job to investigate this stuff. But it is the family law system’s job to protect children.

Natalie Campbell:
Thank you for listening to this episode of the ADM+S podcast. You can watch the full event recording on our YouTube channel.
