ADM in Child and Family Services: Mapping what is happening and what we know
12 February 2021
Dr Joanna Redden, Western University, Canada
Assoc Prof Philip Gillingham, University of Queensland
Prof Rhema Vaithianathan, Auckland University of Technology & UQ
Carol Ronken, Director of Research, Bravehearts Australia
Prof Paul Henman:
Welcome to this recording of an event held on the 24th of November 2020. This event was held to examine and map automated decision making in child and family services. It represents one of a number of similar mapping events that are being and have been held to understand how automated decision-making is being used in a range of social services. Other events will cover ADM in social security, in disability services, and in criminal justice systems. I hope you enjoy this event and find it stimulating, and a very good opening to some of the key issues in understanding how ADM is being used in child protection and family services. I’m Paul Henman. I’m based at the University of Queensland, which is hosting the event on behalf of the ARC Centre of Excellence for Automated Decision-Making and Society. And I want to acknowledge, in particular, Professor Julian Thomas, the Centre Director based at RMIT, who’s with us today.
First of all, I want to acknowledge the traditional owners of the land on which the University of Queensland, where some of us are, stands: the Jagera and the Turrbal people. We pay our respects to their ancestors and their descendants, who continue the cultural and spiritual connections to country.
So, today’s is the first of four events related to the social services part of the Automated Decision-Making and Society Centre of Excellence. I wish to acknowledge that some of our participants are very familiar with the Centre, and some of our participants are new, contributing from a broader range of the world and a broader range of the topic area. And we really appreciate this dialogue. This 90-minute to two-hour discussion is intended as a more dialogical component, part of the process of trying to get a sense of what automated decision-making is happening and what it means in the area of child and family services. To do that we have three international guests and contributors. First, Joanna Redden. So welcome Joanna, joining us from Ontario in Canada, from Western University, and recently at Cardiff University. We have Associate Professor Philip Gillingham. Welcome Philip, just down the road or around the corner at the University of Queensland. And we also have, joining us from Auckland, Professor Rhema, and Rhema can you pronounce your name please?
Prof Rhema Vaithianathan:
Vaithianathan.
Prof Paul Henman:
Thank you. I didn’t mispronounce it completely. So each of our international speakers are academics who have researched and worked in this field of automated decision making and child and family services. They’ll give us 15 minutes of input on their areas of research, or their areas of knowledge, and I’ll introduce them each separately. We’ve also managed to get one person, Carol Ronken, so welcome Carol. Carol is from the organisation Bravehearts, which is probably well known to lots of people within Australia. Carol is the Research Director and will be able to give some reflections about what working in child and family services might mean, and what automated decision making might mean in that space. And then we’ll be having a roundtable discussion about identifying the challenges – the issues – that we might want to explore further as a research area, and also particularly what practitioners and policy makers need to know, or want to know, in this rapidly changing and evolving field. I also want to acknowledge some of our attendees from government. Christine Lo from the New South Wales Department of Communities and Justice. Rouel Dayoan, can you pronounce that Rouel?
Rouel Dayoan:
It’s Rouel Dayoan, pretty good.
Prof Paul Henman:
Thank you. So Rouel’s from the Commission for Children and Young People in Victoria. We also have Beth Parker, who is at the Department of Health and Human Services, Victoria, and Zara Berkovits from the Queensland Family and Child Commission. We also have a number of attendees from non-government organisations: Alexander Ruiz and Sophie Mackey, who are both from the Australian Red Cross, and Red Cross is a partner organisation of the ARC Centre of Excellence. So welcome to both of you. Lindsay Wegener, is that how? A V or a W?
Lindsay Wegener:
Wegener, that’s fine.
Prof Paul Henman:
Wegener. It’s my German from high school; it always wants to pronounce the W as a V. So Lindsay’s from PeakCare, a peak organisation advocating for child and family services in Queensland. And Susie Edwards, from the Family Inclusion Network in southeast Queensland. We also have various members – Chief Investigators, Associate Investigators, and other people from the Centre – and I won’t go through all of them, because so many of us know each other. But it’s important for our visitors to be aware of who we are, and also for the Centre people to be aware of our really welcome visitors to this event. So, as I mentioned, the ARC Centre of Excellence has a number of focus areas – and this is a message really to our visitors – and the focus areas are substantive areas on which our research will focus over the seven years of the Centre. One of those is the social services focus area, and I’m one of the lead organisers coordinating that focus area. In the first 12 months of the Centre, we’ll be having four different workshops like this one; this is the first of the series. These are designed to build a baseline knowledge-base to inform our research programs, our engagement and our impact. And these are really important things for engaging with society – the interface between research and academic knowledge creation, and its translation into the world. That’s very much what the Australian Government pays for in terms of its investment: that it has an impact in the world. This is an interactive workshop, and each of our three keynote participants will be contributing towards a workshop report that we will be able to use as a baseline for building our knowledge-base within the Centre. The social services focus areas, beyond this particular event, include social security and income support, criminal justice, and disability services.
So what do child and family services mean, and what does automated decision making mean?
So I think, just to get a sense of shared knowledge of where we’re coming from, child and family services can cover a wide range of services. But typically people think of child protection, focusing mainly on the identification and management of child abuse and neglect, including out-of-home care. It can also include parenting education and training, and family support. It is very much a sector with a lot of human service professionals, and a contribution from both state and non-state actors, particularly in Australia and the UK – not so much non-state actors, necessarily, in some other parts of the world. And in Australia, of course, it’s a state responsibility involving both service delivery and, increasingly, advocacy, like the children’s commissioners and children’s rights locations within government in the three eastern states – sorry, not Tasmania: Victoria, Queensland and New South Wales.
On the other hand, automated decision making is the use of digital technologies and algorithms to automate all, or parts of, human decision making. In child and family services, risk assessment has been a very important part of that, and that risk assessment is very much focused on decision support tools and the long-standing process of structured decision making. But there are obviously other areas where automated decision making may become part of, or may be seen to be helpful in, child and family services. And that’s something I would like to throw open to our discussion at the end of today. So before we start with the three presentations from our researchers, are there any questions that people might wish to ask, or any clarifications about how we’re proceeding?
Okay, so I want to start today with Joanna Redden. Joanna is Assistant Professor in the Faculty of Information and Media Studies. She’s also a co-director of the Data Justice Lab and is author/editor of two books: Compromised Data: From Social Media to Big Data, and The Mediation of Poverty. When I met Joanna a few years ago in Montreal, she was part of the Data Justice Lab team undertaking research on data scores within UK local authorities. So I welcome Joanna – thank you for coming and joining us from your snowy part of the world – and I’ll pass over to you for your presentation.
Great, thank you. And I guess I’ll just indicate by saying ‘slide’ when it’s time to change the slide – is that how we work? Excellent, okay. Thanks very much for the invitation. If you can go to the next slide.
The research that I’m going to draw on today comes from a few projects: mainly the Data Scores as Governance project, which Paul mentioned we were presenting on in Montreal. That project has concluded; the research was published in 2018, and academic articles followed from the publication of the report. The project was really focused on mapping and analysing how risk assessment, or different kinds of scoring systems, were being implemented by local authorities in the UK. Since then we’ve had a follow-on project looking at civic participation – how to advance civic participation in the ways that governments are making use of data systems. And more recently I’ve been looking, with my colleagues, at where and how government agencies are piloting and then deciding to cancel their use of different kinds of data systems, and what we can learn from that. So next slide please.
So the Data Scores as Governance project is a team project; these were the team members who worked on it. Next slide please.
The project involved a range of methods. We held multi-stakeholder workshops with practitioners, with members of civil society and community organisations, as well as academics. We did desk research, which involved automated searches – we modelled this on the Algorithm Tips project in the United States – where we scraped a range of government websites for documents. We submitted 423 freedom of information requests to try to get more information about what was happening and where, and then we did more detailed case study investigations looking at six different types of data systems in use by local authorities. We also did interviews with public officials and practitioners, as well as with members of civil society organisations – 27 interviews in all. And we built what we call the Data Scores Investigative Tool in order to share the information that we gained in our documents with others, in the hope of advancing more debate and investigation in this area. We worked with journalists to try to better understand how to create a tool that would be useful for them. Next slide please. And I should just say a more detailed overview of our methods is available in the report, which is published on our website. So this slide didn’t come out exactly as I wanted it to, but these are our six case studies. For the purposes of the talk I’m doing today, I’m going to be talking in particular about uses of data systems related to child welfare, so I’m going to be focusing on the first three case studies: Bristol’s Integrated Analytical Hub, the Hackney system, and Manchester’s Research and Intelligence Database. Next slide.
If you’re interested in our documents – the freedom of information requests that we collected, as well as the other documents that we got through scraping government websites – you can access the Data Scores Investigative Tool here and take a look at some of these documents yourself. Next slide please.
So, in terms of general findings, we found that applications were being used across different areas of government services: child welfare, social care, policing, and fraud detection. We identified the development of data warehouses and expanded data sharing arrangements in order to enable greater uses of data. We identified different kinds of systems, from systems being used for population-level analytics, future planning and resource allocation, to systems developed to risk assess populations or to risk assess families and individuals. We found an ongoing tension, speaking quite broadly, between the needs and intentions of local authorities and councils to do more with less, and the rights and democratic principles raised by civil society organisations who were concerned about increasing data uses across different areas. We found, quite broadly speaking, that applications and levels of transparency were highly context dependent. And we found a range of different approaches to making use of data systems, from public-private partnerships, to in-house development, to systems being purchased off the shelf and then implemented for a local authority’s own needs.
Overall and across the board, we didn’t find much effort to measure the impact of the systems on frontline staff, on resource and service allocation, or on service users. Next slide please.
In looking at the way in which these systems are being used for child welfare, we set out to advance a grounded and situated analysis of the politics of data systems. We wanted to consider data systems as complex assemblages of artefacts, people, programs, infrastructures and ideas, as a means to consider how these systems are socially constructed, while also taking them seriously as technical artefacts in their own right. Next slide please.
This is a wall of text, I recognise that, but I just wanted to give you an indication of the analytical framework that we were following. This is very much informed by Rob Kitchin’s data assemblage framework, and we used this analytical framework in order to look at different aspects of the systems that we were analysing. So we wanted to situate these systems in relation to systems of thought. We wanted to attend to the political economy of the systems: what kind of policies were influencing the kinds of systems being introduced? We wanted to consider the data marketplace that these systems were part of. We wanted to consider finance, and we wanted to consider governmentalities and legalities, particularly the regulations and laws that were influencing what was and wasn’t being done, and how it was being done. We were interested in forms of knowledge, so we wanted, as much as possible, to get a sense of the knowledge outputs that were being produced as a result of these kinds of systems, and we looked at how those outputs were being shared. We wanted, to the extent that we could, to get a better understanding of practices. We wanted to understand materialities and infrastructures, and we wanted to understand communities and places, and how these were influencing the systems that were being implemented, and how they were being implemented. And this whole framework, as I mentioned, is taken from Rob Kitchin’s book The Data Revolution. Next slide please.
Now I will stay on this slide for the remainder of the talk, and I’ve got a timer on to keep track of myself here, so I’ve got about seven minutes. What I wanted to do in the time that I have is talk through some of the findings – what we found in the UK, particularly in the systems we were looking at that were focused on child welfare. And I’ll just go back here; I just wanted to describe to you really briefly the differences between some of these systems.
So as I mentioned, we were looking at Bristol’s Integrated Analytical Hub, and this was a system developed in-house to make use of a database that consolidates 35 social issue data sets covering about 54,000 families. The hub was created initially as a data warehouse in response to the Troubled Families program, in an effort to provide a holistic understanding of the family. Developers wanted to develop a more strategic understanding of the city and the challenges facing families, in order to make better decisions and better understand where risk and vulnerability lay. After the development of the data warehouse, the team began looking into different ways to use that data to predict future needs, and they created a model for predicting child sexual exploitation.
The Hackney Early Help Profiling System is another predictive system. In this case, the London Borough of Hackney worked with Ernst & Young and a company called Xantura, and they trialled a program called the Early Help Profiling System. The system was used to try to identify children at risk of abuse and neglect, and it brought together data from multiple agencies. The idea was to develop a system that would send monthly risk profiles to social workers for those families identified as most in need of intervention. Manchester’s Research and Intelligence Database, at the time that we were looking at it, was not a predictive system as such, in the way that the others were. This system was really developed to combine data sets about families in order to identify those that could be categorised as meeting the criteria of the Troubled Families program in the UK, and the aim of the system was to make it possible for workers to access data more quickly, and to make the best use of the data they were legally able to see. The data warehouse that was created combined 16 data sets, and caseworkers were able to access data going back five years. As with the Bristol system, the aim was to enable a more holistic understanding of people’s needs and services.
Okay, so, if I focus here on systems of thought: what we found is that local authorities were turning to these systems largely because of the austerity program introduced by the Conservative government after the financial crash, which saw some local authorities in England have their budgets cut by as much as a third. As a result, local authorities were trying to help more people with less, and these systems were viewed as a way to provide more targeted services. In terms of policy, I already mentioned briefly that the Troubled Families program was a driver for the implementation of these systems. The program was introduced in 2012, with funds set aside for local authorities that were able to identify families labelled as ‘troubled’. The term itself is problematic, but families were labelled as troubled based on set criteria. The goal was to enable more direct interventions, and local authorities were compelled to do this because being able to categorise families as troubled could lead to more resources, which were desperately needed. I should say, rights organisations and civil society organisations raised concerns about this kind of labelling in datafied landscapes, because of how easy it is for labels to stick with people and become amplified in ways that can affect opportunities. And we’ve seen this with other kinds of labelling systems used to risk-score groups of people, like the gangs matrix in the UK.
Concerns have also been raised about the extent to which these systems individualise social problems by stigmatising families, and direct attention away from the wider social contexts that lead to family crises. In terms of governmentalities and legalities, we found that a duty of care was used as the main justification for using predictive risk assessments, but that this was being challenged by some, who were arguing for greater debate about rights and about whether or not there are adequate protections for people whose data is caught up in these systems – particularly since the potential harms of being labelled or being risk assessed weren’t being investigated. We also identified, through the different applications, the different ways that local authorities went about engaging with consent. For example, Manchester actively sought consent from those whose data was used in their system, whereas in Hackney there was a decision not to seek consent, because it was perceived that doing so might compromise the effectiveness of the system. And so what we found is that there are differing attitudes about the need for subject consent, which suggests divided opinion in these areas.
In terms of the datafied marketplace, through the documents that we collected, we were able to demonstrate a range of public/private involvement in government data practices, and a growing data marketplace that requires attention.
Each local authority that we looked at had differing ideas about what was appropriate in terms of involving private companies, which suggests that there are ongoing divisions about this among local authorities. We also found this in civil society, which demonstrates the need for more widespread debate about what people think is the appropriate role for private company involvement in the area of social care. So, for example, Bristol City Council deliberately decided to develop their system in-house to maintain complete control, and Manchester decided in their implementation to buy a system off the shelf, but then not to engage in further data sharing arrangements with that company, making use of the system as they saw best.
Hackney, on the other hand, while they do have their own data analysts, decided to contract Xantura to provide support. I’ve got just under a minute left, but I’ll note that there were differing levels of transparency, accountability, and oversight, which demonstrates just how difficult it can be to gain information about what’s happening, how it’s happening, and what challenges local authorities are experiencing as they try to make use of these systems. In other research that we’re doing, we’re arguing there’s actually a lot of advantage to be gained from more open and transparent conversations about the challenges being faced in making use of data systems, particularly predictive systems. We found that when it comes to predictive systems, we lack proper knowledge of how data systems impact on resource allocations and actions taken, and how changes in practices have impacted on families and children.
A survey that we’re doing of government agencies that have paused or cancelled uses of automated decision-making systems demonstrates that other government agencies are also cancelling the use of similar systems. But again, little information is publicly available. In terms of materialities, infrastructures and practices, we found that there’s ongoing development of data lakes and increasing data sharing, and we’re seeing new information systems being created that are putting a range of pressures on social workers, which I think might be something that others are going to be touching on later on, so I won’t go into it too much. But there are concerns about how this affects the ability of frontline workers to engage professionally, and this was a concern raised to us by members of the British Association of Social Workers. And with that I will end, because I’m over time here. But I’ll just note – if you want to go to the next slide, and maybe this is something we get to in our discussion – that there are a whole range of suggestions being put forward from across sectors about how we increase transparency, accountability, civic participation, and means for meaningful citizen engagement and intervention where these systems are being implemented. So, with that I’ll conclude and say thank you. I look forward to the rest of the discussion.
Prof Paul Henman:
Thank you very much Joanna, for a very insightful and really helpful overview of what has happened in the UK, covering that great variability which is really important for helping us think through the ways in which technology is mobilised differently, and with different outcomes. I will hold off questions and comments, probably to the end, because I think Rhema has a limited time frame with us today. Are you able to stay with us Joanna? Yes, thank you.
Now there’s Joanna’s contact details, should you wish to contact her directly.
Our next speaker is Associate Professor Philip Gillingham, from the University of Queensland. He is an Australian Research Council Future Fellow and a former ARC DECRA Fellow as well. Philip – let’s see, there’s a chat – whoops, sorry. Philip is based in the School of Nursing, Midwifery and Social Work at the University of Queensland. Philip’s career, I think, started as a social worker in child protection, in both the UK and then Australia. He then did his PhD looking at the structured decision-making system in the state of Queensland, and has since undertaken research in Australia, New Zealand and Europe around the use of decision support technologies in child and family services. So Philip is going to give us a broad understanding of what’s happening, but also what this means for practitioners and practitioner organisations. So over to you Philip.
Assoc Prof Philip Gillingham:
Thank you, Paul. Can we have the next slide please.
On the next one – I think Paul’s already covered some of this, but I did work as a qualified social worker in child protection services for 16 years. So I have all that practice background, which makes me think a lot about practitioners, but also about service users. As Paul said, my PhD was all about the structured decision-making tools in Queensland, where they’ve been rolled out. Then I went on to do a Discovery Early Career Researcher Award, which was focused on how we can improve electronic information systems, because certainly at the time, people weren’t finding them very helpful – they were just burdened with having to fill in all the fields, and so on and so forth. Next slide please.
Then, yeah, currently a Future Fellow extending the previous research, but also looking more at what people are actually doing with the data that we’re collating in these systems, which is what’s brought me to ADM. And as Paul said, I’m an Associate Investigator at the Centre of Excellence. I’ll just say before we get into the real stuff that this is a very scanty sort of presentation – headline journalism. What I have done is written quite a detailed document trying to pull this together, which I think Paul’s going to distribute after we’ve had the meeting and decided, after today, what we’re going to do. So, you can see I’ve sort of dipped my toe into this area of algorithmic decision making, or algorithm-supported recommendations and so on and so forth, just to see what’s going on in this area. So, I won’t talk too much about that because it’s covered in the written document. Next slide please.
So, where is ADM being used, or touted to be used, in child and family services? Joanna has obviously referred to what’s going on in England in particular, but there’s a predictive risk model in New Zealand. That’s there in blue, as is Amsterdam, because they never actually got implemented. Hackney, as Joanna’s already talked about, is in red because it stopped due to a public outcry over privacy. There was a system in Chicago that worked for a few months and then started making some really strange predictions about who was at risk and who wasn’t, so it was discontinued. But unfortunately – and this is a point that comes up in this work all the time – there’s no transparency about any of that. It’s very hard to find any information, partly because it was a private company that developed the algorithm, so they don’t want the financial stake in the whole thing exposed. Next one please.
Now, in terms of how it’s being used, it’s hard to say because of all the secrecy and lack of transparency. But Johnson Peterson, interestingly enough, says it could be used for all of the above, and he talks about actually getting rid of social workers and using these predictive tools to match clients to services, as if that’s all we do – which is a gross underestimation of what social workers actually do. Joanna has already talked about the Troubled Families initiative, which I think wrapped up last year, possibly this year. A lot of this work is about risk assessment and giving people scores, as Joanna said. But it’s also about using proxies for child maltreatment – what can people home in on as something we can predict, like re-notification or re-substantiation, and so on and so forth. And just recently, the What Works Centre for Children’s Social Care in London undertook a project of about eight months with four local authorities to try and develop predictive models. Summing it all up, none of the models they could produce were considered to be sufficiently accurate in their predictions to be of any sort of practical use for anyone. And I was very surprised by that, because I tried to follow their process and met with them last year about all of this, and I thought if anybody could do something with this stuff, it would be them. But apparently not. Next slide please.
Joanna’s already touched on this, and I’m sure Rhema’s got some comments as well, but when you look into the actual regulations in the European Union, and the Data Protection Act in the UK, they don’t prevent, or limit rather, the sharing of information, as long as it’s done for the purposes of keeping children and young people safe. That obviously needs interpretation, and I think, as Joanna’s saying, there are different ways to interpret that around the country. There are quite a few studies about ethics coming out, or already out, and the usual sorts of problems are the same everywhere: privacy, obviously information sharing, but particularly also using data for a different purpose than it was collected for. And as I said, there are some very different approaches to actually getting people’s consent, or just hiding it from them so they don’t know. The other problem is accountability: how do we explain the recommendations that are made by ADM when we get challenged? Certainly service users might challenge us, but I also spent a lot of my time in court, where we would have to justify every decision. If we’re using ADM, we’d have to explain how that works, and how it’s picked up on the different things within this case to rate it as high risk, such that we have to intervene. The other challenge is the data that people are using to develop ADM. There are always problems with reliability and accuracy – I’ve done a lot of trawls through files myself, and you can get halfway through and there’s nothing, so it’s not necessarily complete. And there’s a bigger question really: is it representative of practice? And I’ll just say no, it isn’t. It’s just a record of administrative processes. And I think the bigger picture is that if we’re going to move forward with ADM, we need to think very carefully about what information we’re recording to actually assist that process.
Because just being able to predict an administrative process like substantiation isn’t as useful as it actually sounds from the outside. Next slide please.
There isn’t a lot of research about how professionals and administrators engage with ADM. My research with the SDM tools was fairly close, but it was 13 years ago, so we need to have another look at that. Certainly, the SDM tools were embedded in the information system that protective workers were using, so they couldn’t avoid them. But then again, they didn’t use them in their decision making as such. And there are the consultants as well. But in Philadelphia, according to one of the papers in 2017 – I haven’t put the references in this written document – up to 25 percent of recommendations for family supports were being ignored.
Now, I couldn’t quantify this in my study of structured decision making because I didn’t have all the data, but I would say it was slightly more than 25 percent of the time, in the areas that I looked at and among the people I spoke to, where they didn’t follow the recommendations. So we need much more research in this area. It’s not something that we can just impose upon social workers or child protection workers; it has to be useful for them, otherwise they’re not going to use it. Next slide.
Going to the service user’s point of view, there is very little research. Virginia Eubanks’ work has shown how concerned people are about previous involvement with services being used as a predictor. And part of the problem, having worked in child protection, is that we used to get quite a few malicious reports – you know, from neighbours and so on. But it’s also, what does previous involvement actually look like? I mean, there could be lots of cases where the decision was taken not to investigate. If that happens, you know, as a parent you would never know it happened. And if that happens five or six times and they still don’t come and see you, it’s building up. If they’re using a predictor, it’s going to get to a point where you’re pushed over some sort of threshold, according to the ADM, and they’re going to be paying you a visit. Next slide please.
So, what data, research and knowledge are used? Well, I’ve already talked a bit about the limitations of administrative data, and that’s what people are having to use at the moment, because that’s what’s there. But there’s an argument for saying, okay, look at the research. What are the predictors of maltreatment? Not within administrative systems but in the general population. And try to hone in on some factors that we know from research are actually causative rather than just correlated, because sometimes the correlations can be a bit misleading. It might lead back to prior involvement and indicators of poverty, and how well is that going to differentiate between people? Some say it’s about poverty, single parenthood. Well, there are a lot of people like that in society who don’t abuse their children, so it’s only telling us what we already know. And another piece, or just an observation really, from a study that Tim was doing over at ANU: they played around with different predictors, but they found that most of them didn’t really affect the accuracy. So, what he’s arguing is there are usually two or three predictors doing the predicting; for the rest of them, there’s some correlation there, but not really strong enough to make any difference whether you put them in or not. So for claims that say, oh, you know, we use 30 different data points – well yeah, but show me what the correlations are and how much each one matters. It comes back to the transparency argument, because people aren’t necessarily transparent about what they’re doing. Next slide please.
So, just to finish off, some of the questions that I’ve formed in my mind since I started doing this work a few years ago. Why do we need ADM in child protection services? I mean, what are we trying to improve? We know that decisions are subjective and biased and so on and so forth, but when it comes to really serious cases, that’s usually mitigated by involving a whole team of people. So, why do we need it? What is it going to give us over and above what we’ve already got? Will ADM ever be as accurate as practitioners? Bear in mind that at the moment we’re having to use data sets that were created by practitioners, and people are saying, what, 60, 70, 80 percent accurate? Well, that’s not as accurate as the practitioners, because you’re assuming that the data you’re using is actually 100 percent accurate. And that might be very misleading as well, because outcomes frequently aren’t recorded. There’s also an argument – and this applies to structured decision making as well, so we need to check as we roll out ADMs – do these things oversimplify the complexities involved in dealing with child maltreatment? And the answer is obviously yes, they do. I did ethnographic work: I watched what workers do, I went around with them, I know what they’re doing. But when you then look at the file, the complexities and tensions they’re dealing with aren’t necessarily in the file. Also, with the structured decision-making tools, after people had been using them for about a year they got a bit weary of them. One of the comments was that they just state the obvious, and the cases practitioners really struggle with, as we’ve alluded to, are the extreme cases. So yeah, babies with fractures, and so on and so forth. They’re the really hard cases that things like ADM just wouldn’t necessarily help with, and there it’s about what information we’ve got: what’s going on in this situation, how do we get more information, more expert opinion, and so on and so forth.
So, the structured decision-making tools, for one, weren’t telling practitioners really anything, because they could predict what score they were going to get anyway. So, they could manipulate it if they wanted to. In fact, we did find one office that was doing that. So, you know, so much to do.
The other bigger-picture question for me, really, is how we might use these things. Bear in mind, decision making is really only a very small part of dealing with child maltreatment, and it’s not as if we make decisions that are going to stick forever.
Frequently, when you’re doing an investigation, new information is coming in every day that has to be considered, and you might have to change the decision you made yesterday. So it’s a moving situation. That’s what I’d say: that’s today’s decision, we’ll see what happens tomorrow. There’s a lot of uncertainty, but it’s fluid. It’s not like you make this decision and that’s how it’s always going to be. So, is ADM worth the investment? And again, we need a bit more transparency about how much these things cost and how much money we have to put into them, and balance that against what we’re going to achieve. But these are still open questions, things that we need to look at as we go through. So that’s it for me today. Thank you.
Prof Paul Henman:
Thank you very much, Philip, for that wonderful overview of the area, empirically, and also for raising those helpful provocations. So, I think this is a good point to hand over to Rhema.
Rhema’s work is internationally recognised, but particularly your work in the Allegheny County. And both Joanna and Philip have been studying how people have used, or where people have used ADM, but Rhema brings to the discussion her expertise in actually building and developing decision support tools within organisations, and I think it’s important to recognise that Rhema’s work is probably one of the most transparent that I’ve seen in this space. And for which, your organisation needs to be congratulated. So, Rhema, I think you’ll also be speaking a bit about your work in California, so I’ll hand it over to you.
Prof Rhema Vaithianathan:
Yes, screen please. Thanks, you need to share. Excellent. You need to either make me a host or allow me to share my screen.
Prof Paul Henman:
Lata, can you allow…
I think Rhema can share it. Yeah you can share it.
Prof Rhema Vaithianathan:
Yeah, all right, thank you. Thanks Philip and Joanna, I think that was really helpful set-up for my 15 minutes. So, I’m just going to do a little skim of the issues, and then go through one of our case studies, which is actually not from Allegheny but from Colorado, and talk a little bit about the guardrails. So, just to be clear, even though this session is called ADM, automated decision making, none of the tools that I deploy, that our centre deploys, replaces decision making. They are really only ever a decision support tool, so they always have what we call a human in the loop. With these sorts of tools, there is no intention to automate them.
So, let me start with what predictive risk models are – the kind that we work with. There are three sorts of features that make them different for people who are used to more structured decision-making models, where a caseworker might fill in fields. In that work Philip was talking about in Queensland, you know, people said the tools were stating the obvious. Well, it’s not so much that they’re stating the obvious; it’s that the caseworker who is being asked to make a decision is filling in the fields from their own observations and their own judgment. So, in a way the tool is taking what the caseworker knows, putting it down on paper, weighting it maybe, and then developing a risk assessment. The tools we build, predictive risk models – think of them as the same sort of thing, but rather than using the caseworker’s own form-filling, the tool goes into data systems, often just the child welfare case management system, grabs the history of each of the people on the case, and then kind of pre-populates those fields and uses those weights to try to generate a risk of adverse harm. So, what can PRM do? Well, the agencies that I work with – and most of my work is in the US, maybe three US sites at the moment – are trying to use these tools to support caseworker decision making. So, using a risk score to help them in their assessment of risk and safety, or to provide an indicator of case complexity. Because the scores draw on the complex system interactions of the people on the case, they can allow supervisors and others to know how much collateral they should expect from a case. We know that one of the real challenges for a lot of our caseworkers is that they have to go and look for the history, and to compile that history it’s useful to know: should I be seeing a lot of history as a supervisor?
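The contrast Rhema draws here – a caseworker filling in weighted form fields versus a model pre-populating the same kinds of fields from recorded history – can be sketched in a few lines. Everything below (field names, weights, the case itself) is a hypothetical illustration, not the actual feature set of any deployed tool; a real PRM learns its weights from training data rather than having them hand-picked:

```python
# Toy illustration of a predictive risk model pre-populating "form fields"
# from administrative history instead of caseworker form-filling.
# Field names and weights are invented purely for illustration.

CASE_HISTORY = {          # what the data system might hold for one referral
    "prior_referrals": 4,
    "prior_placements": 1,
    "parent_prior_system_contact": True,
}

WEIGHTS = {               # a real PRM would learn these from historical data
    "prior_referrals": 0.4,
    "prior_placements": 1.2,
    "parent_prior_system_contact": 0.8,
}

def prm_score(history, weights):
    """Pre-populate each field from recorded history, then combine with weights."""
    return sum(w * float(history.get(field, 0)) for field, w in weights.items())

print(prm_score(CASE_HISTORY, WEIGHTS))
```

The point of the sketch is only the source of the inputs: the same weighted-sum structure as a structured decision-making form, but fed from the data system rather than from the worker’s own observations.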
When you’ve got a junior caseworker who is going out to do the site visits, you need to know whether they really ought to be coming back with a lot more information, a lot of collaterals to contact, or whether there is much less. So these are ways that these tools are used, and of course, at the end of the day, it helps both at the policy level and at the decision-making level, because it sort of summarises a large amount of data into a single index. So what is the motivation? There are a lot of different ways I could frame the motivation, but one of the concerns that a lot of the agencies we work with have is well explained by Brian Taylor, who talks about the fact that whenever we have a tragedy in child welfare, whenever a child dies, when we go back and look at the challenges and what was happening, what we often end up with is asking organisations to gather more and more and more information. So, they tend to really emphasise – and I’m sorry this is covering the quote – but what he’s pointing out is that we emphasise the gathering of information but not the processing of that information. And we leave that very much up to what we call professional discretion.
Now, what tends to happen when there is a vast amount of data is that we get into what’s called information overload. So a caseworker who’s working with a child who might have had loads of referrals, removals, a complex family genogram – they might be working with hundreds of different interactions in the case history. And when we have such an overwhelming amount of case history, what we tend to do is use mental shortcuts, or heuristics. A heuristic is a strategy that ignores part of the information with the goal of making the decision more accurately, quickly and frugally. So this type of mental shortcut is a response to being asked to make decisions when you don’t have enough time and you’re overwhelmed with information. And whilst heuristics – and Brian makes it very clear, heuristics are a useful method for decision making in child welfare – we also have really amazing work by Jennifer Eberhardt at Stanford, looking at policing decisions, really trying to dig into how unconscious bias works when we have situations with a lot of ambiguity, when we’re overwhelmed by data and using our own intuition. And some of these decisions that are being made by frontline staff are in fact quite vulnerable to these sorts of unconscious biases and other kinds of heuristics taking over. So, heuristics can lead to some unfortunate consequences.
One of the places we’ve been working a lot is at the front door of child welfare. One of the reasons we’re really interested in that is because we know that as concerns about child welfare increase, as child maltreatment gets onto the front page of our newspapers, what our communities tend to do is just start referring more and more children into child welfare. So, if you look at the data in the US, one in three American children will be investigated for abuse and neglect before they turn 18, and for African-American and black children, that’s one in two. So, if you imagine a community where one in two children is subject to that kind of invasive investigation for abuse and neglect, you can understand why those sorts of very invasive systems are not going to achieve a lot. What is happening is that our child welfare systems, which were really designed for a kind of minimal touch to look for severe abuse and neglect, are now becoming much more involved in families’ lives. And one of the reasons is that, at the front end, we have provided very few tools for the people making the screening decision.
The screening decision is the decision that occurs when an allegation of abuse or neglect comes in, and in a lot of the agencies we work with, these decisions are being made within 10 minutes. The average time spent deciding whether to screen in and investigate a child or family is 10 minutes – and yet there is a huge amount of data about previous interactions that family might have had. So, that’s the case study I’m going to go through: a decision support tool for call screeners. Among the other use cases we’re working on with counties is using the fact that these scores allow you to compare like with like, and in almost every county we work with, we see a racial disparity in how children with similar scores are being treated. That gives us an opportunity to do what we call race equity feedback: to try to look at systematic things that might be happening that mean a child with a similar risk profile who is black is being treated or investigated more commonly than a child who’s white with the same profile. And the third one is trying to help focus more supervisor attention on complex cases. I’ll go into a little bit of how that works.
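The race equity feedback idea – comparing how children with similar risk scores are treated across groups – can be sketched with a handful of invented records; the field names and numbers below are illustrative only, not real agency data:

```python
# Sketch of "race equity feedback": among children in the same risk score
# band, compare how often each group is screened in for investigation.
# All records below are invented for illustration.

from collections import defaultdict

# (risk score band, race, screened in for investigation?)
records = [
    ("high", "black", True), ("high", "black", True),
    ("high", "black", True), ("high", "black", False),
    ("high", "white", True), ("high", "white", False),
    ("high", "white", False), ("high", "white", False),
]

def screen_in_rates(records, score_band):
    """Screen-in rate per group among children in one score band."""
    counts = defaultdict(lambda: [0, 0])  # race -> [screened_in, total]
    for band, race, screened in records:
        if band == score_band:
            counts[race][1] += 1
            if screened:
                counts[race][0] += 1
    return {race: s / t for race, (s, t) in counts.items()}

rates = screen_in_rates(records, "high")
print(rates)
```

A persistent gap in screen-in rates at the same score band is the kind of systematic pattern this feedback is meant to surface for the agency to examine.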
So, the case study that I wanted to share with you is called the Douglas County decision aid. It’s being used right now in Douglas County, Colorado. When a call comes in, they put the referral ID into the box at the top and they get a score from 1 to 20, and that score essentially tells them the chance that the child will be removed within two years of that call. Now, we always share with them that sort of static picture, which is really a kind of measure of the accuracy, and whenever I give this talk I remind people that this tool is not 100 percent accurate, because only 50 percent of the children who score a 20 go on to be removed in the next two years. And there are some children who score a one who do go on to be removed. So, this is not a perfectly accurate tool. But to be honest, when we came into most of the agencies we work with and looked at the screening decisions across these scores, they were almost flat. That is, if you looked back and scored the children they were making decisions about a year or two ago, they were overwhelmingly screening in lots of children who had very minimal chance of having any chronic involvement with child welfare, and screening out a lot of children with very high risk.
So, the way the Douglas County decision tool works is that when the call comes in, it first has to be populated with all the children and other people who are associated with the call. The tool then goes immediately and populates the victim, the siblings, other children, parents, perpetrators and other adults. And for each of those roles it looks at demographics, prior referral history, prior placements, prior founded and inconclusive allegations, what the reasons for the different allegations were, program involvement, program denial, prior sanctions in the public benefits history, and juvenile justice history. And with that, what we’ve done is what’s called training a model: we take all of those features and combine them using a statistical method, in this case using lasso, to create a score. And the score is really validated on and calibrated to that particular population. So, unlike a lot of the actuarial and SDM tools that are available, we actually build it for the population we are predicting for. And in this case, we’re showing you the PPV curve: the proportion of children who go on to be removed having received a score of 20 versus a one. We’ve done this by taking what we call a holdout set – a set of children and cases that were not used for the building or training of the tool – and applying the tool to those children. Because they were historical cases, we could follow them for two years and then verify whether the children were being removed at the rates that we had expected.
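The holdout validation described here can be sketched as follows, with an invented handful of historical cases standing in for the real holdout set:

```python
# Sketch of holdout validation: score historical cases that were NOT used
# in training, follow them for two years, and compute the removal rate
# (the PPV) at each score band. The cases below are invented.

from collections import defaultdict

# (score 1-20 assigned by the model, was the child removed within 2 years?)
holdout_cases = [
    (20, True), (20, True), (20, False), (20, False),   # ~50% at a 20
    (10, True), (10, False), (10, False), (10, False),
    (1, False), (1, False), (1, False), (1, True),      # even a 1 can be removed
]

def ppv_by_score(cases):
    """Observed removal rate among children at each score band."""
    counts = defaultdict(lambda: [0, 0])  # score -> [removed, total]
    for score, removed in cases:
        counts[score][1] += 1
        if removed:
            counts[score][0] += 1
    return {s: removed / total for s, (removed, total) in counts.items()}

rates = ppv_by_score(holdout_cases)
print(rates)
```

On real data each score band would hold many cases, and the resulting curve is the PPV curve Rhema shows: the observed two-year removal rate at each score.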
And in fact, they were. The predictive power is similar across racial groups – we compare Hispanic and non-Hispanic, and black and non-black, and see similar profiles in this case. We’ve also looked at the correlation with fatalities. So we looked at children who came in with what’s called an egregious referral, or a fatal or near-fatal referral, and went back and looked at the score that they would have received prior to that referral. As you can see, it’s not perfect, because here’s a child who only scored a two, and a child who scored a five, but 13 of the children scored a 20. And we’ve done this in California for the whole state and shown that about 65 percent of the children who ended up with a fatal or near-fatal outcome – because it’s a big state, the numbers are reasonably sized – 65 to 70 percent of them are in this higher group. So, these tools are quite predictive. And all I’ve told you so far is that yes, they do seem to provide some insight into which children are more and less likely to have severe maltreatment events happen to them. The more important question, really, for all of us, is even though we know we can do it, should we do it? And that’s where we have, at our centre, developed what we call guardrails. We had to develop these because when we started working in this area there was no discussion, really, of ethics or transparency or anything. Subsequently, groups like Casey Family Programs, the Annie E. Casey Foundation and Nesta in the UK have all come out with different guidelines on using these sorts of tools, and we do want to adapt some of those to our work, but this goes back now about five years, to when we started. And we gave ourselves six main what we call guardrails in how we go about doing this. The first is that we’re really focused on agency leadership.
So, there’s me at a meeting in Colorado. We really believe that agencies should be in the driving seat of these tools. These are often quite complicated techniques, and so one of the things we really focus on is educating agencies on how to purchase these tools – not to accept that just because they’re complicated, they shouldn’t be in the driver’s seat and control how these tools are purchased.
So, I think there needs to be a lot of attention on how agencies write RFPs, how they purchase, what the expectations are. In a way, the work we’ve been doing is trying to model what you ought to expect from your vendor when you go out and purchase these tools.
We also really focus a lot on multidisciplinary teams. So, we have people working on ethics, fairness and disparities evaluation. We always try to make sure that there is an evaluation component which is independent of our group, so that someone else has another look at it. And that’s important because there is a bit of business process re-engineering as well. With transparency, we publish all our methodology documents online: the features that are used, how we built it. We publish an ethics report, which is independent ethics advice to the agency, and the response of the agency is all available, for example on the Allegheny analytics website. Robert Brauneis and Ellen Goodman wrote a piece on the use of algorithms in government in the US and found that our work was, if not completely open source, as close as they’ve seen to open-source and transparent use of these sorts of tools.
With fairness, one of the biggest concerns we have is surveillance bias in child welfare data systems. We do not have ground-truth child maltreatment; we have what is reported to our system. So what we have is a confluence of surveillance – the ability of people to observe child maltreatment – and the underlying risk of maltreatment. So, one of the things we always try to do is look for ground-truth, universal measures of adversity, and what we do is what we call external validation. Fatality is one of them. Maltreatment fatality doesn’t have that much surveillance bias: when a child is subject to a death or near-death, they usually get reviewed separately and are part of a much more rigorous set of questions. So we know that data is pretty good, but that data is quite small. So, in the Pittsburgh case – if anyone’s interested, the paper just came out last month, I think – we looked at a universal hospitalisation data set of maltreatment injury hospitalisation, suicide and self-harm, and showed that the Pittsburgh model was highly correlated with those sorts of universal measures of hospitalisation for self-harm, but not with hospitalisation for cancer, which is the thing you see on the bottom right.
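The external validation idea – the model’s scores should track a low-surveillance-bias universal outcome like self-harm hospitalisation, but not a placebo outcome like cancer hospitalisation – can be sketched with a plain Pearson correlation. All rates below are invented for illustration:

```python
# Sketch of external validation: correlate score bands with a "ground truth"
# adversity outcome (invented self-harm hospitalisation rates) and with a
# placebo outcome (invented cancer hospitalisation rates).

import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

scores = [1, 5, 10, 15, 20]                   # risk score bands
self_harm_rate = [0.2, 0.8, 1.9, 3.1, 4.5]    # rises with score (invented)
cancer_rate = [0.5, 0.4, 0.6, 0.5, 0.4]       # roughly flat (invented)

print(pearson(scores, self_harm_rate))  # strong positive
print(pearson(scores, cancer_rate))     # weak
```

A high correlation with the adversity measure alongside a near-zero one with the placebo is the pattern the Pittsburgh analysis looked for.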
We also, as I said, either have ethical reviews or rely on ethicists. The ethical review for the call screening tool relied on work by Tim Dare, who’s an ethicist at the University of Auckland, and Eileen Gambrill, a senior social work professor who was quite sceptical about the use of these tools at all. And they produced an overall assessment. Their assessment was that, subject to the recommendations in their report, the implementation of the Allegheny Family Screening Tool – which is the tool that we first implemented in Pittsburgh – is ethically appropriate. And in fact, they went even further and said that, given the level to which you can predict, and given the paucity of research on call screening, there are significant ethical issues in not using the most accurate risk prediction measure. The other piece we are very concerned about is community voice. So, we work with participatory design experts. Whilst the agency leads the community engagement – and we have a lot of community engagement in our projects – we’ve also got a stream of research trying to take things like participatory design and understand what people’s concerns are about the use of their data in controversial settings like child welfare. There’s a paper that came out last year that we wrote on the use of participatory design methods for algorithms in child welfare, where we compared attitudes in the US and New Zealand. In most of our work we really focus on working with families and people who are affected by the tools. Our main principle is that community engagement ought to directly hear from, and be informed by, the people whose lives are most likely to be affected by these tools.
And so, in this case, we worked with people who had aged out of the child welfare system, or whose children had been removed, or who in some way had had contact with child welfare systems. In my experience – I’ve been on national committees on ethical data use, and so on – it’s really important for all of us to listen to and learn from people who themselves are part of the system and who are subject to these tools. You might hear a very different take on some of these issues than you hear from, you know, the intellectuals, people like me. The other thing is, we’re very focused on independent evaluation, and the Allegheny tool, the AFST, was evaluated; an interim evaluation is available online. So, Jeremy, who evaluated it, found that it increased the accurate identification of children who needed further intervention, without increasing the workload on investigators.
The other really interesting thing he found is that it reduced racial disparities. In particular, it made people more aware of and more responsive to white children who were at risk, and more willing to screen out black children who had a lower score. So the racial disparities fell considerably. I’m sorry, I’m running out of time, but there are a lot of places that have talked about our work – please do feel free to look. Most of them are positive, some of them are negative, but it’s a useful kind of snapshot of where things are.
So, the next steps we’re working on are really about the challenges of supervision. If anyone saw the film The Trials of Gabriel Fernandez – it was a terrible story about a young boy in LA County who was killed by his mum and stepfather, and who had multiple call-ins to child welfare; the family was very actively involved with child welfare. A really interesting case, and the most striking part of it is that the supervisors were taken to court over this for negligence, and maybe even manslaughter. There’s a Netflix documentary about the trial. One of the things they asked Gregory Merritt, who was one of the supervisors, is why were you unable to keep eyes on this family as much as you needed to, because clearly people were not doing the things they should have been doing; the child welfare system was not serving this family well. And he replies, I think the most children I had at any one time was about 280 children. The interviewer asks how difficult it is to keep track of that many cases, and he says you can’t – you don’t know every case you’re supervising, because there are so many and my workload was so immense. So, we’re actually working with some counties who have this problem to build out a sort of complexity flag, using similar principles, that gives a supervisor a single dashboard and allows them to drill down into cases, find out what’s happening and how collaterals are being contacted, and also look at the history of those cases. So, that’s some of the work that we’re doing. Please do visit us at uq.edu.au/CSDA, and you’ll be able to see more examples of that work. Thank you.
Prof Paul Henman:
Thank you very much, Rhema. I think that’s a lovely combination of input, from looking at the broad practices within the UK, to coverage of the issues and practices across the globe, and finishing up with Rhema’s talk about the really practical issues of working on and developing ADM in child and family services – really working through and taking seriously, as an academic but also as an applied researcher working in a very important practice field, what this all means. I want to now invite Carol Ronken. Now, I’ll share my screen again.
So, Carol is Director of Research at Bravehearts, and also a visiting fellow at the School of Justice at QUT, Queensland University of Technology. We’ve invited Carol to contribute a voice from the field – to say, well, we are working within this area, what’s our reflection on and response to the idea of automated decision making in our professional field of work? We also tried to get a number of people from government and other areas, but we were unable to, which maybe reflects the fact that people don’t really know yet quite what automated decision making might mean. But thank you very much, Carol.
Carol Ronken:
Thank you, Paul. I’m going to start off with a little bit of a disclaimer: I’m probably one of the most technologically challenged people that you’ll ever meet, so if I use terms incorrectly, I do apologise. Having said that, we have been looking at automated decision-making tools for a number of years now. It’s something that Bravehearts began looking at in response to some issues and concerns we had, particularly about the family law system. And Paul, can I just get you to change the slide? Thank you.
So, for those of you who don’t know who Bravehearts are, I should probably just quickly mention that Bravehearts is a not-for-profit charity organisation based in Queensland, although we have services Australia-wide. We’re very much focused specifically on child sexual assault and exploitation, providing services for victims and survivors, as well as prevention education programs. I’ve been with Bravehearts for almost 18 years now, and a huge part of my role is looking at how we can improve systems and responses and best protect children. One of the biggest issues we’ve been looking at over the last few years is the lack of information sharing, and the difficulties around assessing risks for children. Particularly when we consider child sexual assault, often there are no physical signs; there is often nothing that stands out. You know, with physical abuse you can often see physical signs of harm to a child; with sexual abuse you very rarely get those indicators. We were particularly looking at the family law system, but at the same time having thoughts about the child protection system more broadly, and the fact that there are so many actors, I guess, in a child’s life who see and observe behaviours – little red flags that might indicate that this child is at risk or is being harmed. And that, for us, was something that’s really important in looking at the risk of abuse and neglect for children, and how we can actually assess that risk.
So, a few years ago we were talking with a company and looking at developing a tool that would allow a number of different players to input information and observations – at the time we were talking about an app – and to have that go straight away to central child protection services. Once it reached a certain risk level it would then trigger a human intervention, for someone to look at the file, look at what the actual observations and information provided were, and decide whether or not it signified that there needed to be an intervention. For us, that early identification of and intervention for vulnerable children is absolutely critical, because it reduces the occurrence of child abuse and neglect more broadly and can definitely improve the life trajectories of children.
We definitely see that there are huge benefits to automated decision making. Having said that, we also have a lot of questions and considerations. Now, we know that a lot of the commentary around this is that there is so much human bias and there are erroneous judgments made by those who are working in child protection, for all sorts of reasons. The case of Gabriel Fernandez is a really interesting one, and I’m really glad that was brought up. I watched that documentary a while ago and it’s really challenging, because these workers have such high caseloads, and trying to manage all of that and keep on top of it all is absolutely huge. So, for us, ADM allows for that potential to spot patterns and correlations in information and data sets – around behaviours and observations around children – and to objectively assess or profile the level of risk involved. However, when I was looking at all of this – and I must admit, my knowledge of algorithms is very limited – I certainly had the question of whether the biases and prejudices that may be held may still be buried in the data that people are actually inputting into these systems, and whether or not ADMs are able to deal with those biases and prejudices. I think a lot of the information that’s been discussed already, by those who are working in this particular area, is really interesting. And I certainly believe that, aside from being able to intervene more appropriately and more readily, it also allows for more targeted and effective allocation of child protection resources, if we’re able to more effectively identify children who are at risk. And Paul, can I just get you to change the slide, please?
When Paul asked me to have a bit of a talk, I immediately said yes and then immediately regretted it, thinking I don’t know what I know and what I don’t know. But I’m always up for a chat, so I was more than happy to do this, and I started to think about what some of the considerations are for someone who doesn’t understand the whole process, and what I would need to know to feel comfortable about this. So certainly, the data sources, where the information is coming from and who is inputting it, are really important. I do think that data about a child from more than one source will help create a bigger picture of that child’s risk. We know that there are so many actors in a child’s life: their teachers, their childcare workers, their sporting coaches, all these people who may be seeing the signs and indicators. They might be small things that don’t actually trigger the thought of having to report that the child may be at high risk of harm, but as we always say about child sexual assault, it’s like a jigsaw puzzle. All these people hold little pieces of the puzzle, and being able to bring all of those pieces together is really important to building a proper picture of what is happening in that child’s life. For me there is also the need for understanding and transparency around what those predictive algorithms look like: what are the inner workings of the models and tools that may be used, or are being considered for use? And I suppose, and this is just me, I’d expect that for many people in the sector, having that presented in a really simple, easily understood way would certainly help raise confidence in the sector around the use of predictive tools in assessing child risk in protective services.
So, understanding what the inputs are, the outputs, the weightings given to certain factors, and so on, would be really important in building that confidence and having that openness and transparency. System accuracy is also something that I struggle with a lot. My background is criminology, particularly around sex offenders, so I have a fair bit of understanding of risk assessments and the challenges of predicting future risk. And we know that there can be errors: there can certainly be false negatives, where we may end up leaving a child in danger, and false positives, where children may be removed when they were not actually at risk of being harmed. So, I certainly think that understanding the accuracy of these tools is absolutely critical. And I was really interested in the data presented just earlier, looking back at child fatalities and the indicators that were present, and seeing that so many of those children could have been identified earlier.
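The trade-off Carol raises between false negatives (children at risk who are missed) and false positives (children flagged or removed unnecessarily) is commonly summarised with recall and precision. A small worked example, with all counts invented purely for illustration and describing no real tool:

```python
# Hypothetical confusion-matrix counts for a screening tool.
# These numbers are invented for the example.
true_positives  = 80    # children at risk, correctly flagged
false_negatives = 20    # children at risk, missed (left in danger)
false_positives = 150   # children not at risk, flagged anyway
true_negatives  = 9750  # children not at risk, correctly not flagged

# Recall: of the children genuinely at risk, what share did we flag?
recall = true_positives / (true_positives + false_negatives)
# Precision: of the children we flagged, what share were genuinely at risk?
precision = true_positives / (true_positives + false_positives)
print(f"recall={recall:.2f} precision={precision:.2f}")  # recall=0.80 precision=0.35
```

In this invented example the tool catches 80% of at-risk children, yet roughly two of every three flags are false alarms, which is exactly why raising or lowering the risk threshold trades one kind of error for the other.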
Ethics and privacy also raise really interesting questions: who has access to the information, and what level of information is shared to facilitate an intervention? There are legal and moral privacy issues we do need to think about, and then choice as well, and the right to confidentiality. As someone who works in the sector, I always think that we need to put the best interests of the child first. But as a very left-leaning, social-justice type of person, I still think: well, what about individuals’ rights to privacy and confidentiality around their information? That really does need to be considered too. And then oversight, I think, is the last thing I wanted to mention. I have no idea how long I’ve been speaking for, but for me oversight is also really important. I like the idea that these tools are thought of as supporting decision-making workers, rather than as the decision-making tool itself, because it’s really important that there is human intervention: once a certain risk level is reached, someone is able to look at the information and pull it apart to see whether there are legitimate concerns and what level of intervention is required. I found this whole child protection system area really interesting, but I would also suggest that we look at the family law system as well. It’s certainly something that, here in Australia, is being focused on a lot at the moment. There are huge concerns about risks to children where there are allegations of sexual abuse, or of abuse or neglect more broadly, and where the family law system has been leaving children in positions of risk because it says it’s not its job to investigate this, even though it is the family law system’s job to protect children.
So, you know, I think that something like ADM that is able to look not just at child protection systems but at other systems as well, including the family law courts, has merit and is worth looking at further. Those were just my thoughts. I’m not sure if I’ve spoken for five minutes or longer. I do apologise if it seems a little simplistic, but perhaps it gives some indication of how those of us in the sector who aren’t technologically advanced might be thinking about these issues. So, thank you again, Paul, for inviting me to have a bit of a chat and share those thoughts, and I’m looking forward to hearing what others think.
Prof Paul Henman:
Thank you very much, Carol.
Thank you for watching this recording of the event mapping ADM in child and family services. Following these presentations, we began a roundtable discussion with members of the university and research community, of NGOs and service organisations, and policy officers from various Australian state and territory governments. Our roundtable discussion included questions about what automated decision making means for child and family service practitioners, organisations, services, and clients. We also explored the key concerns and challenges that need to be addressed when using automated decision making in this space.
This discussion also began to identify the scope of a research agenda for people working in the space, the legal and policy considerations, and questions about how we might co-design automated decision-making for child and family services. A summary report of these presentations and the roundtable discussion that followed is available. Please contact me on the email address above if you want to know more about the research centre as a whole. You can see the web address; please go there, and you’re most welcome to join the mailing list. If you’re interested in this series, we’ll also be producing similar events on automated decision making in other social service areas, so please feel free to explore and ask if you want to know more about them.