ADM in Social Security and Employment Services: Mapping what is happening and what we know
13 October 2022

Professor Paul Henman, UQ
Professor Terry Carney, University of Sydney
Christiaan van Veen, Melbourne Law School
Professor Dorte Caswell, Aalborg University
Dr Simone Casey, Senior Policy Advisor, ACOSS
Watch the recording
Duration: 1:49:44


Professor Paul Henman:

Well, welcome everybody to this event on the early morning of May the 5th. Today’s event, run by the Australian Research Council Centre of Excellence for Automated Decision-Making and Society, is entitled ‘Automated Decision-Making in Social Security and Employment Services: Mapping what is happening and what we know.’

This is the second of a series of events run by the Australian Research Council Centre of Excellence. My name is Paul Henman and, with Terry Carney from the University of Sydney, we will both be chairing today’s session.

So I want to first begin by acknowledging the traditional owners and custodians of the land on which the University of Queensland, where I am located in Brisbane, stands, and of the lands on which we meet, virtually. These are the Yugara and Turrbal people. We pay our respects to their ancestors and their descendants, and recognise their continuing cultural and spiritual connections to country, and their elders past, present, and emerging. We also recognise the traditional owners of the lands from wherever you are joining us today.

Thank you everyone for joining us for this event by the Australian Research Council Centre of Excellence. My name, as mentioned, is Paul Henman from the University of Queensland, and with Terry Carney from the University of Sydney we will be co-chairing this event. In terms of the overview of this discussion, we have invited three people who have worked in this space to talk about what we know about automated decision making in social security and employment services. Terry will introduce them in due course, however I have to apologise first that Professor Dorte Caswell, from the University of Aalborg in Denmark, is unfortunately ill and unable to be with us today. So we will be having some input later from Simone Casey from ACOSS around the use of digital technologies in employment services. I also know that people such as Joe Engold and Greg Marsden have done work in digital employment services and may wish to contribute later in today’s session from their own experience. So I’m putting you on notice, both of you.

So today’s structure is to have some academic input, but we also have input that is focused more on what these digital technologies mean in the field – whether from practices, practitioners, professionals, or advocacy bodies – and we have three people from that field. Terry will be playing more of his legal practitioner role rather than his academic role in that part. The whole purpose of this discussion is to build the Australian Research Council Centre of Excellence’s understanding of what’s going on in automated decision making in social services. We’ll set aside half of the workshop for a discussion, and that’s one of the reasons we’ve kept this event largely a closed event rather than a public one, deliberately including people who have diverse experiences to share around this topic.

I just want to acknowledge a number of our attendees. We have attendees from a wide range of non-government organisations, and I won’t go through all of their names. I want to acknowledge our colleagues from the Australian Research Council Centre of Excellence for Automated Decision-Making and Society, and I also want to acknowledge that we have some attendees from Services Australia.

So welcome to you, and I think there may have been some additional people joining us overnight as well, so please feel free to contribute to our discussion, all of you, as we move on to the second part of today’s activities. I guess the first thing is to say where this event fits in. Social services is a focus area of the Centre of Excellence. The purpose of the social services area is to look at the way in which automated decision making is being used in social services, and we consider social services to cover a wide range of areas.

This is the second of a series of events like this. The first one we held in November last year, and it was around mapping automated decision making in child and family services. The video of that event is available on the Centre of Excellence’s YouTube channel, and a written summary report of the proceedings will be available in due course. We will be doing the same for this event, providing a YouTube video of the presentations but not the discussions. If you have any concerns about your presentation or your involvement in that, please let us know. We’ll also be providing a short report of the findings, particularly picking up what the discussion covered and the key issues that the group as a whole came up with.

So as I mentioned, we already had an event on child and family services and now for the rest of the year, we’re planning events on looking at disability services and criminal justice. And we’re also planning a series of keynote seminars looking at the way automated decision making intersects with questions of gender, race, disability, and other forms of social disadvantage or social characteristics.

So where might we be seeing automated decision making in social security and employment services? Many of you attending will be quite aware of this, but for those who are new, automated decision making is really about the use of digital technologies and algorithms to automate all or parts of human decision-making processes. In the social security system, we see that it has been used for automating eligibility and entitlements, calculating rates, verifying identity, providing payments, and checking compliance. We also have risk assessment tools for risk of long-term unemployment and risk of overpayment or fraud, and decision support systems are sometimes used in particular locations. As we go through today we’ll be covering a range of ways in which these digital technologies are emerging in the delivery and operation of social security and employment services.

So I’m going to hand over to Terry now. Terry, unmute yourself and I’ll let you chair the session from here on.

Professor Terry Carney:

Thanks very much Paul, a great introduction.

My task is to introduce people and keep us to the time allocated. We do have a little bit up our sleeve in one sense, in that we’ve lost one of our key speakers, but we want to make sure that we pick that up again by extending the discussion. So without any further ado, I’m delighted to welcome Christiaan van Veen, who’s the director of the Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at New York University. And we’ve got a nice little slide there, I see, thanks to Lata, our support person up in Queensland. I’m down in New South Wales, a long way away. So Christiaan, the floor is yours and your 12 minutes starts about now.


Christiaan van Veen:

Thank you. Thanks so much Terry for the invitation and the kind introduction. I’m based in New York, as you might have gathered, so it’s late afternoon for me. I don’t have the start-of-day issues that you might be having over there, but I have end-of-day issues: getting tired and longing for dinner.

So I’ve been working on issues at the intersection of the digitalising welfare state and its implications for human rights – mostly for poor and marginalised groups – since roughly 2017. At that time I was a senior advisor on the mandate of the United Nations Special Rapporteur on extreme poverty and human rights, your fellow Australian Philip Alston. In the UN country visits that we organised – to the United States in 2017 and to the United Kingdom in 2018 – we started addressing the implications of digitalisation in the state, mostly in the area of social protection. Those visits, especially the one to the UK, resonated very much with various civil society actors, who started writing to us after those two country visits, and we made the decision to devote what is called a ‘thematic report’ to the UN General Assembly on digital welfare states and human rights, in the fall of 2019. That was a report for which we held extensive consultations. We ultimately were able to get input from about 60 actors – including about 20 governments, plus civil society actors and academics – from a total of more than 30 countries, and I think that report is still a good summary of a lot of the issues that we’re discussing today.

As Terry just said, I also lead a project at NYU which further investigates systems of digital welfare and their human rights implications, and which also tries to be a hub for students, researchers and practitioners – very importantly – to discuss these issues and to further raise the human rights profile of digital welfare. So my contributions today are based on both my UN experience and my experience on that project. Now, to briefly start off on the question I’ve been asked to answer – namely, where is ADM being used in income support and employment services, and in what way is it being used? – I must admit that I often have problems with terminology in this field.

It’s not a particular criticism of the question here, but a more general point. Because despite Paul’s definition just now, there’s obviously no official definition of automated decision-making, and what I tend to see in discussions like this is that it can relate to a whole range of technologies, from the cooler end of the spectrum down to more mundane uses of technology. What is also relevant here, I think, is that the whole term ‘decision making’ is not as fixed as we would like it to be. I have a background in administrative law, and obviously in administrative law the term ‘decision’ is fairly well defined. But in this context it’s obviously broader than just a decision under domestic administrative law, so again, the scope of what we’re talking about enlarges as a result of that. And then, as Paul also underlined, when we say automated decision making in this particular context, there are hardly any cases of purely automated decision making. There’s no total automation as of yet, so there will always still be humans involved in these systems in some way.

A second caveat to make before I get to the examples is knowledge-related. While the whole process of automating decision making in social security is not a new phenomenon, as we probably know, I do recognise that the surge in attention to these issues is fairly recent, and probably overlaps with my own being drawn into this field only a few years ago. Perhaps because of that only very recent spike in interest, it’s quite striking to see how little we still know about automation in government with respect to social welfare. To give you just one example from my home country of the Netherlands: just a few months ago our general audit office released a report on the use of algorithms in central government and concluded, quote, “that they deliberately did not aim for a complete inventory of all algorithms used by the central government”, simply because that would be an undoable task. They scratched the surface, in other words. So I think it’s telling that in a relatively modern, well-administered state like the Netherlands, there’s still fairly little idea overall of what kind of automated decision making is taking place within the state. Of course that’s changing now; we see more reports coming out.

And the final caveat is that we often tend to discuss automation in relation to western states, yet my own research and that of others indicates that it’s definitely not a phenomenon happening exclusively in the west – even though we know far less about what’s happening outside of the west, so to speak. So with those caveats in mind, there are a few brief things I have to say about where ADM is used in income support and employment services, and in what way.

Obviously, this is just a brief overview, and for me it’s helpful to categorise ADM in relation to the different tasks of the state in those areas. First of all, there’s identification. In order to pay out benefits, governments require that individuals verify who they are – what their official identity is – and that process of identification and verification of identity is increasingly automated. One example is the development by the Government Digital Service in the United Kingdom of the Verify platform for identity verification by government. That is a system in which private identity providers were accredited by the government to plug into an online platform and provide identity verification services. An individual wanting to access a government service – say, benefits – then needed to verify her identity via this Verify platform, which often involved being directed to the private provider’s website and entering personal information, which was then matched against available public and private databases to verify whether someone was real and was who she said she was. And that system is now probably going to be turned off, because of frequent failure.

The second category of tasks in which automation happens is eligibility determination. In order to pay out benefits, governments require that recipients are eligible to receive those particular benefits. Again, quite obviously – whether that now happens in a government office or from your home online – part of that determination of eligibility is increasingly automated, especially in the west. For instance, and this comes from a recent report from the United States, state governments there have adopted algorithm-driven decision making to assess disabled people’s eligibility for home and community-based services under the Medicaid program. Another example, which we also quote in the General Assembly report, is the province of Ontario in Canada, where the legislative framework for the Ontario Works welfare benefit program was transformed into, quote, “multiple drop-down menus and check boxes in the social assistance management system”, so that the software’s algorithm can evaluate the data entered into those fields and produce decisions about an individual’s access to benefits. That’s from a thesis by Jennifer Vasile, a very impressive Canadian scholar.
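To make concrete what “drop-down menus and check boxes” can amount to in software terms, here is a minimal, purely illustrative sketch in Python. Every field name and threshold is invented for illustration and bears no relation to the actual Ontario Works rules.

```python
# Purely illustrative: a toy rule evaluator in the spirit of the Ontario
# example above, where policy is codified as form fields and checkboxes.
# Every field name and threshold here is hypothetical.

def eligible(applicant: dict) -> bool:
    """AND together hard-coded eligibility rules over form fields."""
    rules = [
        applicant.get("is_resident", False),
        applicant.get("age", 0) >= 18,
        applicant.get("monthly_income", 0.0) <= 733.0,    # invented cutoff
        applicant.get("liquid_assets", 0.0) <= 10_000.0,  # invented cutoff
    ]
    # Any failed rule produces a refusal; there is no field through which
    # a caseworker could exercise discretion.
    return all(rules)
```

The point of the sketch is structural: once the rules live in code, the ‘decision’ is the conjunction of the checkboxes, and anything the form cannot express simply cannot influence the outcome.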

Then there’s payment calculation, which is closely related to eligibility determination, yet I would like to separate it out as a specific task in social security administration: how much does a beneficiary receive on a regular basis? That is obviously something subject to change, depending on changing circumstances. Again, in the context of the Medicaid benefits for home and community-based services in the United States that I just mentioned, an algorithm is not just used to determine eligibility, but also to determine the amount of assistance that an individual receives – either a budget or in-kind assistance. In the United Kingdom, where you have the Universal Credit digital welfare system, benefit payments are calculated by comparing tax authority data with the data that the benefit authorities hold on beneficiaries, via what is called the real-time information system. That allows for the automation of calculations, and it also allows the system to fluctuate payments on a month-to-month basis, based on what someone earns. Then another task that’s relevant here, I think, is conditionality compliance and sanctioning. As you will know, most social benefit systems nowadays attach conditions to receiving payments, especially where it relates to working-age benefits, and increasingly we’re seeing automation in relation to compliance with those conditions. For example, in Sweden two years back it was reported that the unemployment benefit authorities there automated the process of checking whether those on unemployment benefits actually met their conditions, and also issued automated warnings and sanctioned automatically when people failed to comply – that’s from a report by AlgorithmWatch.
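The month-to-month, earnings-responsive calculation described above for Universal Credit can be sketched roughly as follows. This is a simplified illustration: the allowance, earnings disregard, and taper rate are invented numbers, not the real parameters of any scheme.

```python
# Simplified sketch of automated month-to-month payment calculation, loosely
# modelled on the taper logic described for Universal Credit above.
# All figures are invented for illustration.

def monthly_payment(standard_allowance: float,
                    reported_earnings: float,
                    work_allowance: float = 300.0,       # hypothetical disregard
                    taper_rate: float = 0.55) -> float:  # hypothetical taper
    """Withdraw the allowance at the taper rate for earnings above a disregard."""
    excess = max(0.0, reported_earnings - work_allowance)
    return max(0.0, standard_allowance - taper_rate * excess)
```

Because the earnings figure arrives automatically from tax authority data each month, the payment can fluctuate from month to month with no human recalculation involved; that automaticity is exactly what makes the system both efficient and hard for a beneficiary to interrogate.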

What I find interesting is that in Australia’s cashless debit card pilots – whose whole purpose is to ensure that those on the cards only spend their benefits on the right kind of expenditure – I understand that the government not so long ago piloted, together with a company called DXC Technology, software that would enable it to automatically block specific purchases of forbidden products like tobacco or alcohol. That is of course also about ensuring compliance with conditions attached to benefits. There are two more categories that bear mentioning. One is fraud detection, which was already mentioned by Paul, of course. This is a big category: a lot of social security authorities are prioritising their investments in new technologies in ways to reduce various forms of benefit fraud, and that’s why you see a lot of automation as a result.
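As a rough illustration of what such purchase blocking involves at the software level, consider the toy authorisation check below. The category labels and blocklist are invented; a real system would work from merchant and product classification feeds, which is exactly where the practical difficulties lie.

```python
# Toy sketch of transaction authorisation with product-category blocking,
# in the spirit of the cashless debit card pilot described above.
# The category names and the blocklist are invented for illustration.

BLOCKED_CATEGORIES = {"alcohol", "tobacco", "gambling"}

def authorise(transaction_items: list) -> bool:
    """Decline the whole transaction if any item falls in a blocked category."""
    return all(item.get("category") not in BLOCKED_CATEGORIES
               for item in transaction_items)
```

Note that the entire compliance decision turns on how accurately each item is categorised upstream; the beneficiary at the checkout has no visibility into that classification.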

I was myself involved in litigation in my home country of the Netherlands around legislation and a system called System Risk Indication, or SyRI for short – legislation that allowed government actors to combine a broad range of data held by public authorities into an algorithmic risk assessment tool that assessed which beneficiaries of low-income welfare benefits were more likely to commit benefit fraud. We can talk a bit about the outcomes there, but that’s a good example of what’s happening in many countries. And finally, just to briefly flag, there’s the area of adjudication – cases in which someone seeks a remedy for a decision taken in the welfare context. We see increasingly that there’s automation taking place there as well: the Social Security Administration in the United States, for instance, is experimenting with the use of AI to streamline its appeals processes. So let me briefly talk – Terry, if you allow me a few more minutes – about the human rights dimension I was trying to summarise. It’s hard to avoid getting stuck in examples here, but since we’re only recently understanding the breadth of uses of automation technologies in the welfare state, and social security specifically, it’s also fairly early days in understanding that there’s a major human rights dimension to this that’s not yet fully understood.
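The general shape of a risk-indication model like the one at issue in the SyRI case can be sketched as below. The feature names, weights, and threshold are all invented; in the actual litigation, the point was precisely that the real indicators and risk model were never disclosed.

```python
# Hedged sketch of a linear risk-scoring tool over linked administrative
# data, of the general kind at issue in the SyRI litigation.
# All feature names, weights, and the threshold are hypothetical.

def risk_score(features: dict, weights: dict) -> float:
    """Weighted sum over whatever linked data the authorities hold."""
    return sum(weights.get(name, 0.0) * value
               for name, value in features.items())

def flag_for_investigation(score: float, threshold: float = 1.0) -> bool:
    # Beneficiaries scoring above the threshold are queued for investigation.
    # Without disclosure of the weights and threshold, the flagged person
    # (and, as in the SyRI case, even a court) cannot reconstruct why.
    return score >= threshold
```

Even a model this simple is opaque from the outside: the output is a single flag, and everything that matters lives in the undisclosed weights and threshold.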

I remember when we visited the UK, in meetings with members of the Department for Work and Pensions, for instance, they looked a bit puzzled when we said that one of the major themes of our visit – of a UN human rights visit – would be benefit administration and digitalisation. They really didn’t get that at the time, so part of my project has been raising awareness about the human rights threats that are involved here. In terms of that, I think one recent bit of positive news was that the European Commission just released a proposal for the regulation of artificial intelligence in which they acknowledge – in part due to our extensive lobbying there – that automation in relation to social benefits, and the use of AI there, is a high-risk endeavour, which I think is a positive step forward because it means more protections.

There are three major human rights-related risks that I want to flag here. First – and I think this is often not fully appreciated in these debates – in my view the most important human rights threat here relates to the exclusion of the most marginalised individuals from access to social rights altogether. That deliberate or inadvertent cutting off of vulnerable people from government help, through the use of new technologies, should be a priority human rights issue for us to address. Secondly, there’s one big obstacle to addressing human rights violations in this area, and that is lack of information, combined with the complexity of the subject matter. In many cases that I’ve been involved in, it’s difficult to find out what happened in the first place, let alone to understand what has happened and who is responsible. Just to mention one example: in the SyRI litigation I was involved in, the complete lack of information on how the algorithmic model worked led the court in that case to lament in the following terms: “it was unable to assess the correctness of the position of the state on the precise nature of SyRI, because the state has not disclosed the risk model and the indicators of which the risk model is composed or may be composed in these proceedings. The state has also not provided the court with objectively verifiable information to enable the court to assess the viewpoint of the state on the nature of SyRI”. Which is a pretty damning critique.

In the report on Medicaid that I just mentioned, there’s the following quote: “In 2012, attorney Richard Eppink at the ACLU of Idaho began receiving call after call from people who discovered that the state had slashed their Medicaid benefits, but had no idea why.” That’s a pretty fundamental problem for doing human rights research and advocacy.

A final point to make – and I think an important realisation in relation to these issues, especially in western states – is that those states are often experiencing what some economists call a ‘no-growth economy’. That means there will only be increasing pressure to control welfare state expenditure in the future. Against that background, I think the pressure to use digital technologies and automation to save costs will only increase in years to come. And for anyone who cares about human rights, that means we have to do more than just react to problems after the fact. We also need to be more involved in discussions upstream on the transformation of welfare states as a result of these pressures, and to formulate what human rights-friendly digital welfare means for future systems. We need to be more responsive to that call than we have been in the past. Let me stop there – sorry to go slightly over time.

Professor Terry Carney:

Thank you very much Christiaan. I unmuted myself, yes. Very good. We ran for about 20 minutes, you’ve probably realised, Paul. I’m very pleased to introduce my colleague Paul Henman, who’s our next speaker. He’s a chief investigator on the Centre of Excellence at the University of Queensland, and he modestly says that for about 20 years or so – a few decades – he’s been working on the nexus between digital technologies, social policy, and public administration, and the implications for citizens in particular. But I’ve known his work for, it seems to me, all my life, and impressive it is. Paul, the floor is yours, and your time – your 20 minutes or so, no more – starts now.

Professor Paul Henman:

Thank you very much Terry, and welcome to everybody. As Terry’s introduction might suggest, this has been an area of passion for me, and I don’t need to go through my past work. But I wanted to say firstly that my training was in computer science before doing a PhD in social policy and sociology, and part of that PhD involved an ethnographic study of the Australian Department of Social Security. Following that I worked in the Department of Social Security, which became FaCS and then FaHCSIA. And over that long period of time since returning to academia, I have continued to be interested in the way in which digital technologies are used by government, most particularly in the area of social security and welfare. Christiaan has covered a great expanse of what digital technology is doing; I want to be more specific, but historical. Digital technology is not new. For example, in the UK the national insurance and pensions agency introduced digital technology in 1959 to start to automate the management of cases and contributions. Australia introduced electronic payments in 1967, and local authority social welfare organisations started to introduce technologies in the early 1970s. As technology has changed and created new opportunities, we have seen the introduction of new channels and services: with networked and online systems we had teleservice centres emerging in the 1980s, large-scale data matching in the 1990s, and at the present time we have smartphone apps, voice recognition, and chatbots coming into play. So obviously there have been new developments in digital social security. One of those has been Robodebt, or the Online Compliance Intervention, which has received international attention for many years, and I won’t go over that.
Something that may be less visible to people – and our Services Australia colleagues and David Brown may wish to speak to this – is the Welfare Payment Infrastructure Transformation program. It’s a roughly one-and-a-half-billion-dollar program over seven years to upgrade the payment system and provide greater flexibility around policy and service delivery. Importantly, there’s also a huge intersection with external providers in the employment services area – which I think Simone Casey will be talking about later – including the use of new models for online employment services and the Targeted Compliance Framework.

So one of the key things I’ve been grappling with is: what does this digital technology do to the nature of social policy? What does it do to the way administration occurs, and what does that mean for users of the system? The first thing to know is that technology is largely introduced to automate. That’s the first thing it does. As Christiaan said, eligibility and payment rates have been automated, and in the early 1980s we found a case where the Department of Social Security computer cancelled an age pensioner’s payment in an automated way, because that person hadn’t returned a letter, which wasn’t registered in the system. That went to the Federal Court, because the person said the cancellation was wrong, it wasn’t legitimate. And the court actually decided that it couldn’t overturn the decision, because the decision to cease payment was not made by a human, it was made by a computer, and at that stage the law did not recognise a computer decision. Since that time you’ll find in Australian legislation the authority to delegate the secretary’s power to both human and computer decision makers. A key part of that automation is also the growing codification of policy. Codification has reduced discretion, but at the same time it has helped to increase consistency and, importantly, has been helpful for understanding citizens’ rights – particularly in countries where there were high rates of discretion in social assistance, codification has been really important for recognising rights.

The second area I think is really important to understand is that we have seen greater growth of differentiated and targeted policy and service delivery. This is driven partly by the need to personalise and also to triage, and underpinning a lot of that targeting is risk profiling. For example, since the 1990s we’ve had the Job Seeker Classification Instrument, but also, as Christiaan mentioned, we have risk profiling of overpayment in social security systems – who are the people most likely to be overpaid. Early on these profiles were based on heuristics that people had; more recently they’ve been developed through data mining processes.
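A points-based profiling instrument of the kind just mentioned can be sketched as follows. The factor names, points, and band cutoffs here are hypothetical, for illustration only – they are not the actual Job Seeker Classification Instrument.

```python
# Illustrative sketch of a points-based classification instrument in the
# spirit of the Job Seeker Classification Instrument mentioned above.
# Factors, points, and band cutoffs are invented for illustration.

BANDS = [  # (minimum total points, service stream) -- hypothetical
    (20, "intensive support"),
    (10, "moderate support"),
    (0, "self-managed online services"),
]

def classify(points_by_factor: dict) -> tuple:
    """Sum disadvantage points and map the total to a service stream."""
    total = sum(points_by_factor.values())
    for cutoff, stream in BANDS:
        if total >= cutoff:
            return total, stream
    return total, BANDS[-1][1]
```

Whether such an instrument is built from caseworker heuristics or from data mining, the structure is the same: a single score drives which service stream, and which level of compliance obligation, a person is routed into.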

The third element is that we see a greater use of conditionality – making benefits conditional on particular behaviours or circumstances – and the use of computers to help do that. What’s also significant is that we see increasingly cross-portfolio conditionality. For example, we’ve had proposals in Australia for parents receiving child payments or Family Tax Benefit to have those ceased if their child is truanting from school, or is not immunised, etc. Here we see two different policy areas – education or public health, and social security or social assistance – being brought together, and this has only been made possible through digital networks that are able to match these data. Both of these elements have led to increased complexity, through both the conditionality and the differentiation of policy. And more recently, the big administrative datasets that have been built up over many years and decades of our systems have enabled big data analytics to understand the dynamics of welfare – importantly, to try to work out how to intervene early, just as we know early intervention in health and disadvantage can be very useful. That’s the idea underpinning the New Zealand social investment model, and key to those approaches is the introduction of artificial intelligence and machine learning.

Now, how do we make sense of all of those things conceptually – what are the ethical, legal, and social implications? I don’t want to rehearse these things, but there has been a groundswell of interest in thinking about the ethics and human rights implications of digital and artificial intelligence. This is a really welcome development, because artificial intelligence has come along and really raised questions about what this means for the operations of governments, and for operations more broadly in the commercial sector. But the important point I want to make clear is that the issues relating to the use of artificial intelligence, or the emerging possibilities of artificial intelligence and machine learning, in social security are really just an extension or continuation of issues raised by non-machine-learning, non-artificial-intelligence algorithms. The question is: how are they being used, and for what purposes?

And so we need to recognise that it’s not the AI as such that really matters. Though AI adds some new twists, these issues are largely long-standing ones that were mostly neglected until this more recent phase. I want to emphasise that because, as you can see on the bottom left-hand side of my slide, a report in 2004 from the Australian Administrative Review Council dealt with automated assistance in administrative decision making. So it has been a consideration for many years. But what are some of the implications? There are quite a lot, and I don’t want to go through all of them. Firstly, we know that privacy and data protection have been dealt with and considered many times, and related to that is also the question of surveillance, which people have talked about many times. The other aspect – and I think this reinforces Christiaan’s first point about access – is how people access these systems when they become increasingly complex. We have a computer culture that says the computer is right: that if the computer has the information, it must be correct, which can become a burden. I’ve seen that in my own research, with recipients of Centrelink payments questioning the competence of Centrelink officers but not the competence of the computer, when it was the IT systems that were the problem. And this can lead to what Michael Lipsky calls ‘bureaucratic disentitlement’, where it’s not the actual policy or the legal rules that disentitle people, but the processes – the administrative burden.

There are also challenges about acting on risk prediction. A risk prediction is a prediction, not a reality, and we need to be careful about the way we act on it, particularly if we introduce coercive types of behaviour based on assumptions about what people may do in the future. If we do it in a supportive and facilitative way, that's not as problematic. The other problem with a lot of risk predictions, particularly those used in other areas like child protection, is that they're not dynamic. They don't pick up on people's changes of circumstances. We need to think about how we develop predictive modelling that can be more dynamic and take into account the ways people have changed, so that they are not always blighted by their history of 10, 15, 20 years ago – and how we use those systems to enhance people's capacities to be better at what they're doing. There's also a lot of discussion about algorithmic bias and the way in which algorithms embed assumptions about the world – whether by computer programmers or by machine learning. There are questions about accountability and transparency, the idea that computers are black boxes, and whether that undermines or challenges the review and appeals process. And there's the important question of human insight. We had humans making discretionary administrative and professional decisions that recognised the diversity of people; then we moved into a one-size-fits-all process, and people said we need that differentiated capacity back. Now we have moved into a one-algorithm-fits-all approach, where we think that even though different people are different, we can differentiate our policies. But we know increasingly from AI research, such as on facial recognition, that these systems do well on certain groups of people but poorly on others.
And similarly we need to start to think about how we design our systems so that we actually flag the different types of people who are not well dealt with by the algorithm, so they can instead be dealt with and supported by people – just as banks might automate a whole range of things, but people who have complex situations can go into a branch and work things out.

The last two things I think are important are about the way we design technology. Who is it designed for? Typically we design our systems from the interests of government, the interests of the administrative system. But how do we design for people? For example, there has been research looking at people using the child support system, and at apps that help separated parents manage their situations and their child support relations. Similarly, we need to think about the ways digital technology might help Centrelink recipients, or social security recipients, manage their relationship with Centrelink. How do we help them to know what they need to know? For example, the Services Australia Centrelink website has been a continual bane, because it's actually not user-friendly plain English, and that's something that could easily be dealt with.

And the last thing I wanted to mention is the importance of power when we're dealing with disadvantaged populations. There is a tendency for political power to override other considerations, so we introduce technologies to disadvantaged populations that we wouldn't consider introducing to the tax system. Robodebt is an example where the digital technology created significant changes, but it was more the politics around it, rather than the digital technology itself, that created the problem. So, what are the major considerations for the future of digital social security?

First of all, it's not just about the technology. It's about the intersection of technology, humans, power and politics. How do we get accountability when we increasingly have industry partnerships involved? When we disperse accountability and responsibility to the private sector, how do we engage with that? What is our capacity to challenge the accuracy of the code? Robodebt, for example, showed the way in which individuals could question their own decision, but there was no systematic way to actually challenge the code itself; we don't have the legal capacity to do that. Class actions can help, but they don't actually question the tech or the code – they question each individual decision. I mentioned dynamic risk profiling, and I also mentioned making social security systems – digital systems – user-centric. And the last thing is, how do we move towards legal governance and digital oversight of the overall system? I think the new regulation for artificial intelligence drafted by the EU provides some really interesting insights that we in Australia could consider. So thank you very much.

Professor Terry Carney:

Thank you very much, Paul. And spot on time. I can now hand back to a competent chair, who doesn't mute themselves half the time, to take us through the last three speakers. So it's all yours, Paul.

Professor Paul Henman:

Thank you very much Terry, for chairing that session. So I'm going to introduce Terry. Professor Terry Carney is Emeritus Professor at the University of Sydney School of Law. But rather than speaking in his emeritus professor role, we've asked Terry to draw on his experience as a lawyer involved in the Social Security Appeals Tribunal and the Australian Administrative Appeals Tribunal, to talk about the way in which technology impacts on adjudication processes. So thank you, Terry.

Professor Terry Carney:

Thanks Paul. I just couldn't share until it came up – you should now see my slides. Thank you very much. I'll try to get across this in five or six minutes at most, because there's a lot of detail on these slides. So you've got the outline of the things I'm going to talk about, and I guess the take-home message is that, as both of the previous speakers have indicated, there is a whole host of issues that need addressing to get the balance right between the risks and the undoubted benefits of AI, which ought to outweigh any concerns and limitations. But in the early stages, at least in some of these sectors, the lack of attention has led to the problems becoming significant, and I guess the message is that in the area of employment services the difficulties, at the moment, are probably outweighing the benefits. And regrettably for lawyers – but no surprise to non-lawyers – none of the existing institutional review and other arrangements seems up to the task of redressing even the individual concerns, much less the systemic ones that Paul and Christiaan outlined previously. So, by way of some background, I've done work in this space, and others have too. Mark Considine's work over many decades sets the context: for people not familiar with Australia, Australia was the first – and still, I think, the only – country in the world to fully privatise the task of helping people to find employment when they become unemployed. In other countries it either remains fully with government, or it's partly privatised. And the full contracting out of employment services magnifies some of the problems, because of the number of private sector providers involved. So there are certain features there, which you can see on this slide, of the way employment services and AI fit together.

We in Australia were early and vigorous adopters of the activation policy from the OECD many decades ago. So there is a lot of conditionality that comes as part and parcel of giving people the opportunity to develop skills that might help them get back into the workforce – which is the theory of the positive side of activation. One of the implications of contracting out a service is, paradoxically, that government usually proliferates the number of requirements it imposes upon the private provider of the service, much more than under the old internal government regulatory system. These are designed to ensure value for money for the government, but also that whatever policy settings – like activation policies – are reflected, and that for people who don't follow the rules, if there's a sanction involved, there's some fairness in the assessment of whether the sanction – the reduction or the stopping of a payment – is appropriate. So all of that adds to the complexity that Christiaan talked about earlier today. It's a very complicated system, and one that individuals find difficult to navigate. Looking quickly at the AI challenges: we have a tool to predict the level of difficulty, if you like, that a person who has become unemployed is likely to face in returning to employment, so that you can calibrate the level of service provided to the person while they're unemployed to help them back into work. And of course that relates to how much the government is going to pay the provider to provide that service. Now, statistical profiling has a whole host of difficulties in this area. The international research – some of it cited on that slide – canvasses the issues that arise when you try to develop such a tool.
Our tool in Australia is a logistic regression tool, so it's actually less sophisticated and therefore has more chance of leading to unintended or inappropriate results than would be the case if it used a machine learning AI method. In a number of overseas countries the tool is a machine learning tool. That's not unproblematic, but it's less problematic than one like the Australian tool which, as I say, uses a logistic regression methodology – and of course, like overseas, there's no transparency as to the underlying calculations. We can move to the next slide.

OK, so a number of people have written about the sorts of issues that crop up in this type of taxonomy of artificial intelligence in this sort of setting – I won't take you through it all, but the piece by Simon Chesterman is particularly helpful, as is Paul's own work that he's already touched on. They indicate that problems magnify as you move from mere automation up through the more sophisticated range of options – up to machine learning and the other more complicated approaches, to put it shortly. And as the Alston report – the UN Special Rapporteur Philip Alston's report – with its very distinctly dystopian view of automation in welfare indicates (it's arguably too dystopian, but only by a tad), it's not just Australia that is confronting the need to think about how to address these risks. It's a worldwide phenomenon, in both the western world and, more so, in less developed countries.

Okay, so what can law, and the Administrative Appeals Tribunal and so on, do to fix this? Well, the short answer is: really not much. There has been a long history of digitisation in welfare, and of welfare administrative review in the tribunals, but the AAT does not have jurisdiction over the most critical aspects of life for an unemployed person dealing with a contracted-out employment service provider. For example, the AAT is not allowed to consider or review a decision by the instrument that determines the degree of difficulty of finding you a job. So if you actually have a much more complicated situation and need a great deal more assistance than you're getting, and your problems arise because you're on one of the lowest levels of funding for your provider – and that's what's giving you angst, and giving your provider angst – there's no way that you can go to the AAT as an individual and say, 'I want this classification reviewed.' There are internal departmental processes, and they usually work pretty well, but if push comes to shove there's no way you can go to the external referee, if you like – the tribunal – and have that decision reviewed. There are a number of other examples to the same effect, which bear out my point that the things that matter most to an unemployed individual on our JobSeeker Payment – as we now call our Newstart payment – can't really be addressed at all, or only at the periphery. There are some decisions the tribunal can look at and say, well, we don't think this is right, but it can't remake them the way the tribunal thinks they should be made. It can only send them back to be remade. Now, usually that works, but there's no guarantee in the sense that the tribunal normally delivers. So, moving to the next slide: are there any solutions?
Really not a lot. There's material that's already been talked about today by Christiaan on some of the overseas legal challenges to systems – we don't have anything akin to the EU's regimes either: their recently released, or leaked, draft standards, or even the General Data Protection Regulation – that's what GDPR stands for – all of which go some way, at least in Europe, towards addressing some aspects, particularly privacy, and some of the items on Paul's and Christiaan's lists of areas where more thought, work, systems and ideas need to be developed. It's a blank space for us in Australia. But in any event, I'm not sure – and nor are most of the people whose work I read overseas – not too many people are confident that these legal solutions are really any significant part of a solution. They are one small part, but anybody who thinks that adopting the European regulations in Australia would be a panacea, frankly, I think, is living in la la land. That's my point. And it's not just me who thinks that believing lawyers, new laws or new legal avenues are the main part of the solution is la la land – as I read the international literature, this is also a pretty strong view worldwide.

So I've gone longer than I intended, but if you go to that very last slide, this is what the EU's leaked draft regulation is proposing. And yes, as Christiaan said, it identifies these sorts of areas – the sorts of issues I've been just skating over and saying we can't really solve through tribunals in Australia – and it says, yes, they are a really high risk. It labels this area of administration, and social policy generally, as high risk, but the EU's regulation doesn't go beyond self-regulation in balancing the risk against the protections. Yes, it's a 40 or 50 page regulation, and it takes you half a day to read, but it is quite good in delineating, on the one side, all the things that are risky and, on the other side, identifying some of the avenues for dealing with that risk. But at the end of the day it says this aspect of social policy is a really high-risk area, because of the very vulnerability that Christiaan mentioned of social security recipients, or the unemployed seeking work, and so on – yet it leaves it to self-regulation, in the main, to provide the remedy. And I think most people, looking at the way self-regulation operates in other high-risk areas, would have a number of big question marks about how adequate that might be. I love my cartoon on Robodebt – that's what we want to avoid, but we don't want the brilliant potential of AI to be crueled by lack of attention either, in the formulation, within government or elsewhere, of public policy, and in considering what machinery we need to ensure the balance and remedies, and so on. Thanks.

Thank you. I apologise for running over, but that's me. Thank you, Paul.

Professor Paul Henman:

Thank you Terry. Now it's my turn to introduce Dr Simone Casey. Simone has recently joined ACOSS, the Australian Council of Social Service, as a senior policy advisor. ACOSS is also a partner organisation of the Centre of Excellence for Automated Decision-Making and Society. Dr Casey was formerly of Jobs Australia and recently completed a PhD looking at employment services. So welcome, Simone.

Dr Simone Casey:

Hi, thank you everyone. Thank you for the organisers for including me in this event. I’m really excited to talk about digital developments in employment services, and I particularly today just wanted to help you visualise what this looks like from the point of view of people using those systems, and to explore some of the implications of digital employment services by using a short case study.

So we seem to have lost my slides, but while we're getting them back I'll just go through my background in a little more detail. Yes, I'm currently a senior policy advisor at ACOSS; prior to that I was a research associate at Per Capita, and I had a period volunteering for the Unemployed Workers Union as a policy advisor. Before that I was at Jobs Australia for 14 years, where I had the opportunity to observe the digital transformation of employment services over a number of years. Towards the later part of my tenure at Jobs Australia, I also had the opportunity to get very close to, and learn a lot of the detail about, the roll-out of the targeted compliance framework: I undertook the same training and immersed myself in the same guidelines, to understand the system from the point of view of the providers. So I've observed all of these digital transformations with interest as a student of welfare conditionality – the ethics, the efficacy, the distribution of power, and the locus of decision-making control in employment services. I'm totally fascinated by this whole subject, and I'm going to try to walk you quite quickly through what digital employment services look like in Australia today. You'll have to excuse my croaky voice.

So, first of all, the important thing to understand about the Australian context is that we're well on the way through a digital transformation that is significantly shifting the way employment services have been administered in Australia to date. This has been part of a transformation process that's been underway for a number of years, and in July 2022 we will move to a completely new model in which there will be three types of services, and only 50% of job seekers are expected to continue using face-to-face services. The others will use digital services, in which they will have access to various levels of support from a contact centre that will actually be run by the government, by the department of employment. So that also signifies a shift to some renationalisation of employment services, to some extent. And quite a number of people are already using digital employment services, because we've been in the stage of various trials over the last couple of years, and with the tsunami of unemployment from Covid last year a large number of unemployment claimants were placed into digital employment services. So what does that look like?

On the next slide we have an image of the digital dashboard. Most people using digital employment services engage with what's called the digital dashboard, and that's really your one-stop-shop visualisation of what your job-seeking requirements are, and the extent to which you're maintaining your compliance with those requirements. When the targeted compliance framework was introduced in 2018, existing job seekers were assessed for their digital readiness – basically just by assessing whether they had access to a digital device at the time they were asked the question. So most job seekers, even those receiving services face-to-face, will be using some form of digital interface relating to their job search requirement. And it's really quite significant in Australia that the first digitisation initiative focused on maintaining job search compliance, rather than anything else that might be related to trying to find a job. Now, I have to say the digital dashboard I'm showing you on the screen does include elements further down where you can set up job-matching alerts and so on, and there are quite a few different AI and nudge technologies behind this now, which will offer you jobs, or offer you motivation to increase the number of activities you undertake, and things like that. So this is what the digital dashboard looks like, and it is really the cornerstone technology of digital employment services at present, and as we complete the transformation to the new employment service model in 2022. You can see on that screen we've got a dial that, in this case, shows some red points on it. This means the person using this dashboard has accumulated demerit points, which are part of the targeted compliance framework and which accrue until you actually receive a financial penalty.
But you do actually get payment suspensions when you fail to meet a requirement, and then you are required to undertake a re-engagement activity, depending on what the original requirement you failed to complete was. And that's really quite significant, because what I'm going to talk you through in a second is what happens when that goes wrong.

The dashboard also shows you what your current job search effort is. So in Australia we have a default requirement for most job seekers to undertake 20 job searches per month, and that means uploading evidence of having applied for a job rather than just ticking a box to say that you did that. So you actually have to supply documentary evidence. The dashboard also shows you what your tasks are – provider appointments and so on – that are coming up. So on the next slide I’m just going to talk you through a little bit of a scenario when things go wrong, when you’re unable to maintain your compliance with the requirements that have been set for you.

So the first thing that happens is that your payment is suspended and you have to complete a re-engagement requirement. This screen is showing you some of the guidance provided to job seekers about how to navigate or negotiate things on their digital dashboard. If your payment is 'on hold' – which is the new terminology we've been using for payment suspensions – it tells you that you have to complete your requirements. The red box is saying, in this case, that the person has to complete 10 job searches to get their payment back, and I'm going to talk about the significance of that in a second. There's also another banner that comes up saying you've got one demerit point. So at this stage we can already see that as soon as something goes wrong with a job seeker being able to maintain their compliance, they're already getting these warning messages, and they can't proceed and do anything else until they've done the thing they hadn't done in the first place – in this case, meeting the required number of job searches. I'm just talking you through this scenario really to provoke a bit of thinking, as a teaser to some of the issues that have already been arising through the use of these digital interfaces. So, on the next slide: what happens next? This case study is showing you that if you haven't completed your 20 job searches in a month and you've got seven more to do, those seven are added to the 20 you have to do in the next month. Now, I'm concerned about this, because it starts to make job searching even more onerous than it had been. It basically accumulates from one month to the next if you haven't been able to complete those job searches. This is called the re-engagement requirement.
So it's a requirement under social security law, but it's a requirement that's decided by the secretary of the department of employment and then codified into the system, so that it automatically accumulates on the dial that people then see on their interface. Now, I currently have an inquiry with the department of employment about the policy intent here. Is the policy intent really to add on those seven job searches, or has that somehow been included in the system because of a system design assumption about the policy intent? So I'm really interested in scrutinising the detail of what actually happens when people are using these interfaces, and how the policy intent is translated into a requirement on a digital dashboard beyond which a person can go no further, and cannot get their payment back, until it has been completed. As I say, I currently have an inquiry with the department of employment, and some of the detail around this might not be 100% accurate, but I have flagged it as a concern: is this the actual policy intent of the targeted compliance framework, or has some assumption been made that built this into the job seeker dashboard?

Now, this scenario is really just something I wanted to use as a teaser, to think about what digital employment services look like. I've started to think about this as a digital dystopia, where you can't get past the computer saying no until you've completed job search re-engagement requirements. And that's really just to open up some thinking and conversations about what this all might mean in employment services. I also wanted to draw your attention to an article I had published in the AJSI, which I called 'Towards Digital Dole Parole', and to the current human rights inquiry into ParentsNext. Many of you who have been interested in the ParentsNext case will already be aware that human rights concerns have been raised because of the application of the targeted compliance framework and the digital dashboard to participants in the ParentsNext program. So that's me. Thank you, and sorry for the croaky voice.

Professor Paul Henman:

Thank you very much Simone. I appreciate that it’s really interesting to see what is happening in employment services. I was intrigued to hear the Tinder-like approach to employment matching, so I wonder whether you can swipe left or right for the jobs that you get.

So our last person contributing to the voices from the field is Daniel Turner. Daniel is a senior solicitor from the Welfare Rights Centre in New South Wales – the state, not the university. So welcome, Daniel.

Daniel Turner:

Good morning. Thank you for that introduction, Paul. And just reflecting on that presentation by Simone, I'm very grateful that I'm not looking for employment in this market and having to comply with those obligations – it's actually terrifying. So as a solicitor for the Welfare Rights Centre in Sydney I work closely with people who are in receipt of Centrelink payments, and I'm going to take a bit of a risk and go a little bit off script, because I've been reflecting and taking notes on a lot of the presentations – particularly Paul's and Christiaan's – which have brought to mind a number of examples I confront in this space assisting people. The main concerns, if I could put them under a heading around automated or assisted decision-making, are really about transparency and accountability, and the ability of people to understand the decision that's been made and effectively challenge that decision. So, as we know – or as we may know; I know Christiaan has worked in the area of administrative law – when I was thinking about this subject I was thinking, technology, it's all very complicated automated decision-making, I don't know much. But in fact, on a regular basis I engage with assisted decision-making technology used by Centrelink officers in relation to decisions about recovery of overpayments, and the tools used in this area are referred to as an ADEX, a schedule, or a MultiCal. What they do is assist the decision maker to calculate an overpayment and arrive at a debt figure. Income information is inputted, then the system compares what a person received in terms of their entitlement against what they ought to have received, and spits out a figure which is the overpayment amount.

Now, the tool is undoubtedly helpful for officers within Centrelink making that decision, but unfortunately, when a person requests evidence of how the decision was arrived at, what they're provided with is in fact the raw output of that system, not an explanation. The document we have to contend with as solicitors in this area, and spend a lot of time trying to work out and translate, is full of acronyms known only to users within the Centrelink system. It often doesn't include critical information like the income that was used to actually calculate the debt, which is absolutely vital to scrutinising the decision. A person who is the subject of one of these decisions is completely incapable of interrogating these reams of documents; even the workers and lawyers in this area struggle to deal with them, and I've seen the AAT struggle as well. So in terms of the design of these processes: they're all well and good, but we must bear in mind that they need to be transparent. The people who are subject to these decisions need to be able to scrutinise them, and the fact that they cannot impedes the exercise of their review rights. It's very fundamental that a person is capable of understanding how a decision that affects them was arrived at. In relation to debts, the explanation is lacking significantly, and this leads to another phenomenon – Christiaan or Paul may have referred to this – of deference to computer decision-making: recipients don't question decisions made by computers. When they're confronted with pages of documents with a lot of numbers, containing a lot of jargon no lay person would know or have heard of before, the impression is that this is very technical, so it must be right. That sort of cognitive bias is a significant problem.
And we see it at all levels of the review process, where internal decision makers and even external decision makers treat these documents, and the debt decisions made with them, with a great deal of deference. And we see decisions affirmed regularly which are incorrect, because they have not been scrutinised – and they have not been scrutinised in part because it's next to impossible to scrutinise the decision.

So, often if evidence is not led before the tribunal to show that a calculation is correct, it's taken to be correct. And that's very problematic when the person who would be leading that evidence is the Centrelink beneficiary, who's very unlikely to be in a position to scrutinise the decision itself. So that's a bit of a bugbear – it's been around for a long time, and Terry, who sat on the tribunal, would be very familiar with these documents as well.

I did want to reflect on the fact that the complexity of decisions in social security law means a degree of automation and assisted tools are absolutely necessary. But I've had a case very recently involving an individual who had their payment suspended for a period of eight months. Their carer was in fact someone who had worked for Centrelink for many years – no longer did – but understood the process and was advocating on his behalf. And for eight months they could not rectify the issue; in fact, no one was able to explain to him, or understand, what the problem was. It was in fact an IT problem – a lock on him receiving payment, which could only be identified and rectified by an IT person going into the system and lifting it. That's a person eight months without income support, in absolutely dire circumstances, because of a technological problem that couldn't readily be understood. So, you know, that's a significant concern.

Look, I think there’s a lot to say in this area and in the discussions coming up, and I look forward to participating in that. I could go on forever – I’ve been taking so many notes, and so many thoughts have arisen as everyone’s been talking and providing input. Reflecting on employment services and the decisions made in that space, and the lack of accountability, Terry’s absolutely right. If someone comes to us with a problem with their employment service provider, there’s very little that we can do about it. That’s a real problem, because people are left in a position where their payment is suspended by decisions that are often lacking in rigour and unappealable. So that’s my input for the moment. Thank you very much for the opportunity to join this discussion.

Professor Paul Henman:

Following these presentations, we then turned to a round table discussion about some of the issues raised. We have not included those discussions in this recording. However, we were fortunate enough to obtain a copy of the recording that Professor Dorte Caswell was going to contribute to this event, so please stay and watch her contribution. Thank you.


Professor Dorte Caswell:

Hello, my name is Dorte Caswell and I’m going to give you a brief lecture on developments, dilemmas and dialogue in the Danish social and employment services. I’m a professor of Sociology and Social Work at Aalborg University in Denmark. I’m based in Copenhagen, and this is a beautiful picture of our campus.

So I’m talking on the basis of a couple of big research projects that I’ve done along with Professor Flemming Larsen and a lot of other researchers and municipalities in Denmark. The first is an innovation fund project called LISES, and since 2020 we’ve been running a collaboration between researchers at Aalborg University and a number of Danish municipalities under the headline of CUBB, which is a Danish abbreviation for much the same idea: local innovation in social and employment services.

One of the main focus areas in this way of collaborating is that we do research on the employment services in municipalities, and the focus of both the research and the practice development is to make the services more user-involving. So the overall headline is to develop better, more qualified, and above all more user-involving services in the municipalities.

One of the ways we’ve been doing this is to change the dialogue between research, as knowledge producer, and practice. Rather than thinking that researchers should develop knowledge, find out what works, and then hand that solution over to the municipalities, we come from a different perspective: our idea is to develop a dialogue between what is known through research and what is known in practice. So we have this overall model of mutual innovation and learning platforms, where we bring together researchers and practitioners, both at the management level and at the frontline level. Recently we’ve also started to develop these learning platforms with clients, so we meet with clients and have a dialogue between the clients, with their kind of knowledge and their experiences of the employment service, and the researchers.

Up till now we’ve done about 140 of these learning platforms. They take about two to four hours each time, and the overall frame is that researchers come into the meeting with particular kinds of knowledge – perhaps analysis we’ve done ourselves, or concepts that we have gathered from the existing research – while practitioners come to the platform with their interest in the particular area that will be discussed there. So this is a way of developing knowledge.

We as researchers take away an in-depth understanding of the nooks and crannies of practice – all the dilemmas that occur in day-to-day work – and can take that into the research and develop our analysis with this knowledge. But the practitioners also very much take new reflections away from these platforms.

We talk about knowledge as something that disturbs practice and moves practitioners’ perspectives in a way that enables them to see things they haven’t necessarily been aware of before. So this is an ongoing movement between knowledge in research and knowledge in practice, and development in this particular direction of user-involving services in the Danish municipalities.

As I said, we’ve done this for a number of years, and the figures I have are slightly out of date. We have about 125 observations of meetings with clients in the different municipalities, and we have observed a lot of meetings in the municipalities: management meetings, professional team meetings of the frontline workers, et cetera. Then we’ve done these learning platforms – I think we’ve just passed 140 of them – plus a lot of interviews with clients, around 45 at the moment, and the learning platforms with clients.

We’ve also done a number of what we call positive deviance cases: clients who were previously vulnerable and far away from the labour market, but who have managed to get into the labour market, either on special terms or in ordinary jobs, or in some cases into education. Basically, clients who have come from a very marginalised position and entered the politically and societally preferred position of participation in society through education or employment. We’ve interviewed these clients, we’ve also interviewed their frontline workers, and we’ve tried to find out what happened in these processes – are there any common explanations for this pattern of moving towards employment and education?

I’ve published an article on this along with Sophie Danneris, which can be found on the website. Throughout this process of working with the municipalities, we’ve had very close dialogue with management, both in the particular municipal organisations and across the municipalities, where we arrange annual seminars. We’ve also given a lot of talks to the local political committees. An important point to make here is that throughout the whole process – and this goes back to 2016 – we’ve had complete, full and open access to these municipal organisations as researchers. Basically, we can enter any kind of arena in the municipalities, which enables us to understand the practice in ways that we have previously been unable to.

Just to give you a brief tour of the developments in the Danish social and employment services, starting from the 2000s: there has been an increasing focus on social disciplining approaches, on labour market participation and sanctions, as we’ve seen throughout many parts of the world, and also an increasing focus on the role of the labour market – both employers and activation in companies, individual placement and support, etc. We’ve also seen a constant expansion of the target group; again, this is not particularly Danish, and we write about this expansion of target groups in many different parts of the world in the book that Flemming Larsen and I edited along with Peter Kupka and Rik van Berkel. But it means that at the moment in Denmark, quite a big proportion of the client group would previously have been identified within the realms of social policy, but are now within the employment services, or their benefits go through the employment services.

2007 to 2009 is an important point in time in Danish history, when the municipalities took over employment policy. Previously, the job-ready insured unemployed were the state’s business and cash benefit recipients were the municipalities’ business, but since 2007, and fully since 2009, the municipalities have had full responsibility for the whole of the employment service in Denmark. Interestingly, and slightly absurdly, this whole organisational change was based on a deep distrust of the municipalities, and so along with it came a huge governance structure: benchmarks, extensive data collection about performance, a lot of regulation of processes built into legislation, and so on. Talking about the digital elements of employment policy, this is an important element of the Danish case: this kind of governance has only been possible because Danish society is thoroughly digitalised and there is data on everything we do, linked to social security numbers at the individual level, so there’s a lot of data to draw on and base these benchmarks on.

In 2016, there was a change in the reimbursement of the municipalities – the money that the municipalities get back from the state. Initially, the reimbursement was built around activity levels: levels of activation, and percentages of active clients over time. This was followed by a sort of ‘what works’ phase, which was very focused on the kinds of activities, so labour-market-oriented activities would give a higher reimbursement, and a certain number of meetings with clients would also give a higher reimbursement. But since 2016 that has all changed, and reimbursement has been based only on time. We have what we call a reimbursement ladder: in the first year there is a relatively high reimbursement from the state to the municipalities, after which the level falls, and the municipalities are left almost entirely responsible for the financial side of clients.

Also since 2016, as part of the whole push towards new public governance, we’ve seen an increasing focus on co-production, on the role of the client, and on this idea of user-involving services. As we speak, this is definitely a policy area on the move. The present minister for employment has announced a big political reform after the summer holiday, and the first part of this political agreement has just been decided – prematurely, really, because it came before it was due. Part of that is saving money, mainly on the group of young clients, particularly those with mental illness, and also on the refugee and immigrant group and the older group. Those are the main areas where they want to save money, and they want to move some of that money into ensuring that people who have worked hard have easier access to early retirement pension or disability pension, and also to even out the financial support for families. So this is very much on the move at the moment.

Just to give you a very brief detour through the digital elements of the Danish social and employment services, I’ve put three links within the PowerPoint. I’m not going to go through them, and apart from the first one, jobeffekter.dk, which is also available in English, they are all in Danish. But the links will give you an opportunity to click through and see the digital setup of these platforms. I’ve included these three different digital arenas because I think they are interestingly different elements of what goes on in the Danish employment service in terms of digitalisation.

So the first one is a knowledge transfer attempt from the central government – the Danish Agency for Labour Market and Recruitment – with a strong focus on evidence. It basically collects randomised controlled trials and quantitative effect studies, and it aims to create an easy-to-use platform for the municipalities to click into when deciding what is the right measure to use for particular client groups. I could say a lot of critical things about it, but I won’t at the moment. It is not widely used in the municipalities; the central government agencies, I think, are still quite proud of it, but it’s been challenging in terms of uptake.

I won’t say any more about that. The second platform, jobindsats.dk, is very influential – a very strong data source used for lots of things. The municipalities use it all the time themselves; they use it to keep track of their own development: how many cash benefit cases do we have, is there a development over time, how many sickness benefit claimants do we have, what about our level of disability pensions, is that going up or down, et cetera. This kind of platform is possible because of the Danish digitised society. As I said before, we have a social security number, which means that everything is collected, and I know that in some countries, when we explain this sort of data-collecting society that we have, it scares people. But I think from a Danish perspective, we have strong faith and a lot of trust in our central government, so it’s not really discussed in Danish society whether it is problematic that these data are available. In this context, it enables quite high-quality data to be used both at the municipal level and very much at the national level. So this platform is also used for the benchmarking, and very often for naming and shaming: the Minister will come out and say, well, this municipality and that municipality are very low on the ranking, this municipality does not use sanctioning enough. So it’s used on many different levels, and often used for research as well.

So a strong, influential data source that is also sometimes challenging in the way it’s used. The last level I will talk about is the individual level. There’s something called My Plan, which is a digital overview of all the information about the employment service’s plans for each individual client, and clients themselves can access it: anything that’s written in My Plan is available to the client. Some municipalities that we work with use My Plan very actively, also as a way of changing how they write about clients, to include the clients themselves in this digital knowledge about themselves. It’s part of Jobnet, which is also a digital platform developed here. But My Plan is basically meant for individuals to have access to what goes on in the welfare state as it regards themselves. As Danish citizens, we also have access to our health data, et cetera. So this is a natural element of a thoroughly digitalised Danish society, where we have an e-Boks to which everything sent to us from the central welfare state is delivered.

We have our payslips there; we have everything in this e-Boks. So Jobnet and My Plan are the employment service part of that individual access to the knowledge that the state, or the welfare state at the municipal level, holds and writes about me. Now, a central element, or dilemma, in the Danish employment service is a lack of responsiveness, which is addressed very strongly both at the central level and at the municipal level. It challenges the legitimacy of the current system. At the moment we have, across all parties and in public opinion, this demonised job centre: the job centres are pointed to as the problem. But there’s a reason this lack of responsiveness has built up over time. It’s a challenge, it’s definitely a dilemma, but it’s there for a reason. Over time, municipalities have been forced to develop a focus on implementation and production. There’s strong bureaucratic, financial and performance management control from the central level, which I’ve already addressed, as well as performance- and result-based management and very comprehensive monitoring, partly made possible by jobindsats.dk, which is available and visible for everybody to see.

There has been over time, and is at the moment, extensive legislation. A famous Danish politician, Margrethe Vestager, who is now in the EU, once said that they use a shovel to create new legislation and a pen to take away a problematic rule.

So basically it builds up over time. There have been numerous attempts to simplify and de-bureaucratise, but overall the legislation is still very extensive. And what we see is that all of this tends to reduce the ability of the system and the frontline worker to actually listen to the client. I’ll go very briefly through this.

So this is just going back to where I started, with the CUBB and LISES projects. These potentials are what we have been focusing on in our research and in our dialogue with the municipalities. Rather than coming up with quick-fix solutions as researchers, what we tend to do is open up and understand dilemmas, and create the opportunity to develop a reflexive practice around balancing out these dilemmas. I’ve written an article along with Flemming Larsen that goes through all of this, so I won’t go into it, but the important point to make here is that rather than seeing this as developing particular projects, particular measures that work, or particular focus points, the whole point of what we are trying to do is to make system innovation. We need to address all of these levels at the same time, which is obviously challenging, but in order to make change and create more user-involving social and employment services, we need to address all these different areas.

So just to finish up: I’ve talked about this dialogue between research and practice, so what have we found so far? One point is that moving from the focus on single interventions to these broader political and organisational contexts – from project innovation to system innovation – tends to create an opportunity for more development in the municipal organisations in the direction of user-involving services. It’s also an important point that this is done through dialogical knowledge production. The first digital platform I talked about – jobeffekter.dk, the evidence-based resource for gaining knowledge about what works – is built on a very strong knowledge-transfer idea: building into a particular system the access to what works. But there is resistance to, and a lack of use of, this kind of knowledge. So while this knowledge might be very relevant and might have the potential to create change, it doesn’t get implemented. Our idea instead leans on the vast research on knowledge mobilisation from other areas – the health sector, for example, has produced a lot of research on knowledge mobilisation, as has the social area, etc.

So one of the ways we’ve been doing this is that we’ve worked with particular frontline workers whom we define as knowledge brokers: frontline workers who are more closely engaged in the dialogue with researchers. At the moment we have about 50 knowledge brokers in the five municipalities we work with, and we see that the knowledge developed and picked up from research moves very quickly into the organisation, because these knowledge brokers carry this research knowledge into their everyday practice. The problem-based learning approach is also central to the way we produce new research. When we have these platforms, we discover problems that need analysis, and this discovery of problems is shared between the organisations and the researchers. The researchers can then take these problems back to our research desks and develop new knowledge – find out, you know, do literature reviews about what we know about that. And because it’s problem-based, the municipal organisations are very interested in getting this research knowledge back once we have developed it. So there’s a smoothness in how we move between these different universes of practice and research. And I think the third and last point is the necessity of active participation from frontline workers and clients when designing and developing social and employment services. Rather than leaving out the frontline and leaving out the clients, they need to be part of the development overall.

Thank you.