EVENT DETAILS

ADM in Disability Services and Accessibility: Mapping What Is Happening and What We Know
20 September 2021

Speakers:
Dr Lyndal Sleep, UQ
Prof Gerard Goggin, Nanyang Technological University, Singapore
Prof Jutta Treviranus, Ontario College of Art and Design, Canada
Prof Karen Fisher, UNSW
Justine O’Neill, CEO, Council for Intellectual Disability
Emeritus Prof Terry Carney, University of Sydney
Watch the recording
Duration: 1:21:41

TRANSCRIPT

Dr Lyndal Sleep: 

Welcome, everybody, to "Automated Decision-Making and Disability Services: Mapping What Is Happening and What We Know." This event is run by the University of Queensland node of the ADM+S Centre of Excellence, and I am Lyndal Sleep, a postdoctoral research fellow in the Centre focusing on automated decision-making in social services. The chief investigator of the UQ node is Professor Paul Henman, who is also my supervisor, and I'd like to thank Paul and the Centre of Excellence for the opportunity to share this session today. 

 

This is a closed event, but the recording of the presenters and the presentations will be available online on the ARC Centre of Excellence for Automated Decision-Making and Society website, and the session will be developed into a Centre discussion paper. Thank you particularly to our attendees from government and from the community sector. Your input and perspectives are vital for the success of this event; we appreciate your time and expertise and value your contribution very much. I'd also like to welcome members of the ADM+S Centre of Excellence, whose expertise and contributions are appreciated as always. I'd also like to thank our Auslan interpreters, Enro Webb and Cat Edmonds from Auslan Services. The event will also be live captioned to enhance accessibility. 

 

I'd also like to acknowledge that this week is the International Week of Deaf People, first launched in 1958 to commemorate the first World Congress of the World Federation of the Deaf. This Thursday is the International Day of Sign Languages, so the timing of today's event is significant. Finally, I'd like to thank Jodie Brown, Sally Story and Terry Carney for their invaluable contributions in organising this event. In particular, thank you Sally for running the technical aspects of today's session. 

 

Now, first it's important to acknowledge the traditional owners of the land on which we meet today. I'm currently on the Gold Coast, which is Yugambeh country. The St Lucia campus, where the UQ node of the ADM+S Centre of Excellence is based, is on Jagera and Turrbal country, and we have participants logging on from various traditional lands across Australia. We pay our respects to the traditional custodians, ancestors, and their descendants who continue cultural and spiritual connections to country. We value their contributions to Australian and global society. 

 

Okay, first I'd like to take a few minutes to provide an overview of today's event. To begin with, I'll provide a very brief introduction, and then we will have three esteemed international academics presenting on what we know is happening in automated decision-making and disability services, and what it means. We are fortunate to have Professor Gerard Goggin from Nanyang Technological University in Singapore, Professor Jutta Treviranus from the Ontario College of Art and Design in Canada, and Professor Karen Fisher from the University of New South Wales Social Policy Research Centre. Due to time differences and other constraints, Professor Treviranus is unable to present to us live but has recorded her presentation and will endeavour to log in later for the discussion. These presentations are followed by two voices from the field: Justine O'Neill, CEO of the Council for Intellectual Disability, and Emeritus Professor Terry Carney from the University of Sydney. Finally, there will be a roundtable discussion among all attendees exploring the themes of the presentations and moving towards developing a research agenda for automated decision-making and disability services. 

 

Now, today's event is a discussion with academics, practitioners and policy makers designed to map what we know about the use and effects of automated decision-making in disability services around the world in general, and in Australia in particular. It is the third event in a series on mapping automated decision-making in the social services; previous events have focused on child protection and on social security employment services. The purpose of this event is to develop a shared understanding of key emerging issues, with the intent of shaping the research agenda for the ARC Centre of Excellence for Automated Decision-Making and Society, for our research centre, and for discussions in general. Automated decision-making refers to digital technologies used in decision-making processes, like algorithms, artificial intelligence, and machine learning. Digital technologies are increasingly being used in disability services, and the disability sector, to automate parts of decision-making processes. The disability sector's relationship with digital technologies is complex, with some technological innovations improving accessibility for people with disability, while others create new barriers of exclusion. Although the sector has increasingly adopted a human rights approach, working with people with disability rather than on them, people with disability are still often excluded from decisions about their own care and well-being. Increasing automation of decision-making in disability services provision promises timely, tailored and accurate decision-making, but risks further excluding people with disability from decisions that are made about their care and day-to-day life. 

 

Topics we hope to cover today include: where ADM is being used, or touted to be used, in disability services and accessibility. In what ways is automated decision-making being used in disability services? How do professionals and administrators engage with such automated decision-making? How do people with disability understand and experience processes that involve the use of automated decision-making? And what data and research knowledge are used to develop ADM? 

 

Our first presenter today is Professor Gerard Goggin. Gerard is the Wee Kim Wee Chair in Communication Studies at Nanyang Technological University in Singapore. He is an internationally renowned scholar in communication, cultural and media studies, whose pioneering research on the cultural and social dynamics of digital technology has been widely influential. Key books include Apps (2021), Global Mobile Media (2011), and Cell Phone Culture (2006). Professor Goggin is also a leading researcher in the area of accessibility, inequality and digital technology, especially relating to disability, with books such as Digital Disability (2003), Disability and Media (2015), and The Routledge Companion to Disability and Media (2020). He is currently researching disability and emerging technology in Asian contexts. Thank you very much, Gerard, for your time and expertise presenting today, and I'll hand it over to you. 

 

Prof Gerard Goggin:

Thanks so much Lyndal, and thanks Paul and others for organising this today. It's great, having had some involvement with some of the early thinking on the ADM centre, to re-engage, and it's an incredibly important topic; I think it's an amazing opportunity to really have some systematic work in this area. So let me just share the screen, hopefully people can see that. Okay, so what I just wanted to talk a little bit about today is to try and take up the rubric of the session, what do we know about what's happening, and to do that from a very partial perspective, I feel. So, I was previously working at the University of Sydney on Gadigal land for a fair while, and moved to Singapore about two years ago. And in Singapore I've been particularly working with colleagues around disability and emerging technology, and a number of us, my colleague Wong Meng Ee and others, including Jutta as well, are working on a small project on disability and AI in Singapore. And actually, the topic of today has been something we've been wondering about, trying to figure out where we go next in terms of our exploratory work in this area, and to look at things like ADM. So, part of that is in the back of my mind thinking about this session today. The title of this is a bit wordy; it's trying to get at what I think is going on. I could be completely wrong, and I'm intrigued to hear.

 

I think there's a lot happening in ADM and disability services in diverse sites, and it needs to be better conceptualised, more integrative and collaborative, and, being kind of offshore, the international component I think is really interesting. And I think there are real resources there in bringing together disability and critical technology work, and law and policy as well, and so that'll be really important for us to do, but a great opportunity. And at the get-go, I'm particularly interested, and we are here in Singapore, in collaborating with the Centre of Excellence around ADM in Asian contexts, and I think disability is a really great cross-cutting case in that regard as well, where I think there are really interesting things happening in, say, Southeast Asia, but they're still only emergent in many ways. So, I think it's extremely important and interesting. Okay, so firstly: what do we know in research? Not much. And I think where we do know things, it's in areas where disability has come into sharp relief for specific reasons. We've fortunately got Terry talking later on the panel, who's done extraordinary work on the Australian Robodebt case that people are probably aware of, given our group today: people being sent, via automated decision-making, letters requesting or ordering them to pay back a putative overpayment of welfare payments. Karen Soldatic and I have just written a piece on the disability intersectionality of that, and the way it has hit particularly Indigenous Australians with disability, and so on. So, I think that's been a really appalling case, but a really interesting one. And elsewhere around the world you can see there's potential here; there are clearly critiques of service administration and governance systems where there are clear or likely disability implications. For me the Robodebt case really made this loom large, but in other places it's not so easy to uncover, and some of what's going on is a bit harder to get at. I think where we do know things, it's a serious worry. If you look at the literature, there are lots of references to people with disabilities simply being added to lists of other groups who might be affected or implicated by ADM, but there's not a lot of systematic unpacking of that. But I think the implications are really concerning, and for a really diverse range of people, and there are implications for other kinds of automated decision-making initiatives and systems.

 

So, I've just got a couple of quotes here from a couple of recent pieces, one from Calo and Citron's "The Automated Administrative State", pointing out that "US agencies are continuing to adopt third-party vendor automated systems that defy explanation even by their creators." And then a piece by Marks on algorithmic disability discrimination, pointing out that "AI disrupts the traditional flow of disability related data to promote algorithmic disability discrimination… Health data has the potential to harm people if it's used to exploit rather than heal and can reduce the autonomy of people with disabilities".

 

So those are just two of not many papers explicitly directed at this, although there's quite an emerging literature on AI, algorithms and discrimination. Actually, quite a lot of papers have started to pop up in the last year or two, and that's really important work, but I don't think it's necessarily focused on ADM. So let me just say that. Okay, so what is happening, in what contexts, with and to whom? Again, this is fairly speculative, in some ways just thinking out loud and wondering. Part of the context is the long histories and rise of automation in decision-making, from computerisation onwards, and even before that. So there are a lot of accounts and mentions of things being addressed in the early 2000s, for instance. And I think part of this is nicely captured in a paper by Torenholt and Langstrup, talking particularly about health, saying that in some ways you've got algorithms caught between a logic of disruption, you know, something new, something innovative going on, and a logic of continuation, and this is perhaps part of what we're struggling with. There clearly is a fair bit happening in services explicitly targeted to people with disabilities. I understand that in the Australian context this has risen up particularly because of the politics, which are obviously really consequential and have been widely debated, about what's going on with assessments, right. In the sense that there seems to be a moment in Australia where the promises of the NDIS are again being threatened by some of the systems in place, and decision-making is really at the heart of that. I'm sure people know much more than I about that, especially Karen, who's going to talk later on. But I think here the equation is being made with, say, Robodebt, and obviously the "robo" is handy rhetorically in that Australian context to call out some of the dangers, with Bruce Bonyhady amongst others talking about this. 

 

I think there's another part of it, I suppose, in relation to how a more holistic account of decision-making is conceptualised in disability rights. Justine O'Neill is talking later on the panel, and I'm sure she will speak to this at much more length. But there's the sense in which disability scholarship, activism and rights, and disability legal scholars particularly, have problematised and called on us to reconceptualise decision-making, right, in terms of legal capacity and supported decision-making. And the NDIS has an object around that too, particularly around enabling people with disabilities to exercise choice and control at the heart of the scheme, right, at the contradictory heart of the scheme. So I think this is really important. This is obviously somewhere disability concepts, work and experience can speak back to automated decision-making as a broader area, because the interdependent concepts that come from disability studies around decision-making, and the body of work and practice around this in law and policy, are incredibly important. In lots of other relevant disability services, though, there really just seems to be a lack of explicit acknowledgement and conceptualisation of people with disabilities. No surprise here, because this is something many people have been working on for many years to try and point out. But take digital health, for instance: if you hear the typical presentation on digital health, of which there are many, and it's a very interesting area, all sorts of implications of digital health services, and the way they're conceptualised, for disability seem to be rarely unpacked, rarely talked about. Obviously, there's a great opportunity here for the CoE, with its health and social services programs, to talk across those.

 

Mental health is another area where there's been quite some work emerging around ADM, automated decision-making, and around AI and algorithms. And this is clearly an area with a lot of intersection between disability and mental health as well. Disability is probably always intersectional, but here it intersects in other kinds of profound ways, and this I think raises a whole bunch of issues as well. So, this is really interesting, I suppose, in an Australian context. Often the focus is on the NDIS, right, as a key scheme, and you can see why that's the case, but clearly disability services also sit at state levels, though the states try to vacate their responsibilities a bit so the Commonwealth can rush in, and across all sorts of domains. So, in terms of what's happening, I think Australia is actually a really interesting case internationally because of the salience of the NDIS, but also, more broadly, because of frameworks in Australia that still allow information to be gained via media reporting on government. In some other countries that's also the case, but in others it can be hard to gain access to developments, right. And disability is conceptualised differently in different parts of the world; there are international aspects, but it's organised through different histories, different categories, different frameworks, different kinds of data. And so many aspects of provision are distributed across different domains. It's very much the case here in Singapore, right: there might be a lead government agency, SG Enable, but there's probably a stronger role in many ways for not-for-profits.

 

The charity sector, ideas of social entrepreneurship: in the society here, people tend to lead with those kinds of things, and then disability is emergent in a different way. And disability rights is not the reflex framework in, say, Singapore; it may not be in other Asian contexts either, and it may not be as widely accepted and spoken about. So, you've got different things going on across different societies, which I think makes this a very interesting area to try and think about. So look, in conclusion, probably more indeterminacy than anything, and a wish for more research and more clarity around this, but I think the effort is incredibly important. Some of the things we've been thinking about in Singapore are the research strategies: how to partner up with different agencies around this, and deal with some of the sensitivities that may crop up. How to actually get a wider discussion going in policy circles, and in the public domain as well, and what some of the strategies are. How to bring together the different facets of disability services in a broader sense. And how to then draw connections with things like digital health, which has been getting more attention both in research and, I think, in the broader public sphere of public policy. So, I hope that's useful, I'll stop there. Thanks very much, I look forward to hearing what everyone else has to say. 

 

Dr Lyndal Sleep:

Thank you so very much Gerard. That was fabulous and very much appreciated. Our next presenter is Jutta Treviranus. Jutta is director of the Inclusive Design Research Centre in Ontario, Canada, which she founded in 1993.

 

The mission of the Centre is to proactively ensure that emerging technical systems and associated practices are designed inclusively. She is also a professor at OCAD University in Toronto, where she established a graduate program in inclusive design. Jutta is the head of the Inclusive Design Institute, a multi-organisation centre of expertise, and the co-director of Raising the Floor International. Jutta is credited with developing an inclusive design methodology that has been adopted by large enterprise companies such as Microsoft, as well as public digital services such as the Canadian Digital Service. Jutta has played a pivotal role in developing international accessibility standards and regulations, such as the ISO and IMS AccessForAll standards, the W3C Web Accessibility Initiative's Authoring Tool Accessibility Guidelines, and the Accessibility for Ontarians with Disabilities Act. Her recent work has focused on the treatment of outliers and small minorities in decision systems. Due to time differences and other constraints, Professor Treviranus is unable to present to us live, but was kind enough to pre-record her presentation and will endeavour to log on later for the discussion. 

 

Prof Jutta Treviranus:

It's an honour to join this important conversation. I'm sorry that my schedule and the time zones meant that I couldn't deliver this synchronously, and I hope to be able to log in to answer questions. I have titled my brief talk "Decisions, Outliers and Small Minorities". 

 

As context, I'm the director and founder of something called the Inclusive Design Research Centre in Toronto, Ontario, Canada. I'm lucky to have the support of an amazing team of permanent researchers, a large global community, as well as students and graduates of the program I started. 

 

Through iterative trial and error, I've summarised our approach as three dimensions of inclusive design. The first is to recognise that everyone is diverse and variable, and that the greatest expertise regarding that diversity rightly vests with the person themselves. Secondly, we need to make sure that our processes themselves are inclusive, and that we design with people with lived experience, most importantly people who can't use or have difficulty using the current design, constantly asking: who are we still missing? And the third dimension is the realisation that we function in a complex adaptive system. There is no completion or fix; no design decision is made in isolation. We need to consider the complexly nested context of our designs. 

 

My focus over the past many years, in all decision systems, has been outliers and small minorities. I make a bold assertion: one key to our survival lies in the ability to address the needs of outliers and small minorities. 

 

And the universal outlier is disability. Disability is at the edge of all other justice-seeking groups. Disability presents an entangled bundle of everything that can go wrong with current AI systems. 

 

One thing to note about disability is that the only common data characteristic is sufficient difference from the average that things are not designed for you. 

 

In disability you have the culmination of diversity, variability, unexpected complexity and entanglement, and the exception to every rule or determination. I came to this realisation in 2015, when I was asked to assess a number of machine-learning-based decision systems used to guide automated vehicles through intersections. Some of you may have heard this story before. These automated vehicle decision systems were used to decide whether a vehicle should proceed, change direction, or stop. I tested them with a capture of a friend who, unlike the norm, pushes her wheelchair backwards with her feet, and to my surprise and horror, all the learning models chose to run her over. 

 

I was assured, however, that the learning models were immature and simply needed more relevant data about people in wheelchairs and intersections. When I retested the models trained on much more data about wheelchairs and intersections, all the models chose to run my friend over with greater confidence. 

 

I realised that this was a recurring pattern, and that its origins were not just in machine learning systems: there was a rationale within data and data analytics, and how we make decisions. Over the past 30 years I've been collecting data on diverse human needs. The only way I can plot this is using a 3D multivariate scatter plot. And in this multivariate scatter plot, what I found was that the needs of a population, when plotted, look like a starburst; I've dubbed it the human starburst. Like a normal distribution, 80 percent is clustered in the middle, taking up about 20 percent of the space, and 20 percent is distributed to the periphery in the remaining 80 percent of the space. The data points in the middle are close together, meaning that they are more alike, and the data points at the periphery are further apart, meaning that they are more different from each other. 

 

And what you would probably surmise is that the further you are from the middle, the more likely you are to have a disability, or to identify as someone with a disability. As a result, any statistically determined prediction is highly accurate in the middle, inaccurate as you move from the middle, and wrong as you get to the edge. 
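
As a rough illustration of that claim, here is a minimal sketch with entirely synthetic data (the distribution, the outcome and the model are assumptions invented for demonstration, not Jutta's dataset): fit one least-squares model to a heavy-tailed population and compare its error in the dense middle with its error at the sparse edge.

```python
# Synthetic illustration of the "human starburst" claim: a model optimised
# over the whole population is accurate in the dense middle and increasingly
# wrong at the periphery. All numbers here are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Heavy-tailed 3D "needs" data: most points cluster near the centre,
# a minority spreads far out to the edges.
X = rng.standard_t(df=3, size=(10_000, 3))
dist = np.linalg.norm(X, axis=1)            # distance from the centre

# A hypothetical outcome whose behaviour diverges away from the centre.
y = X.sum(axis=1) + np.sin(3 * dist) * dist + rng.normal(0, 0.1, 10_000)

# Ordinary least squares: the "optimise for the average" model.
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
err = np.abs(y - A @ coef)

middle = dist < np.quantile(dist, 0.8)      # the ~80% clustered in the middle
print("mean error, middle:", err[middle].mean())
print("mean error, edge:  ", err[~middle].mean())
# The edge error comes out well above the middle error: accurate in the
# middle, inaccurate further out, wrong at the edge.
```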

 

And the same pattern happens for design. If your needs are conventional, average, then design works for you. If you are not quite average, if you differ from the average or the statistical norm, then design is difficult to use. And if you're out at that periphery, where many people with disabilities are, then you can't use most designs. 

 

And I realised that the problem predates AI, going back as far as Quetelet and the invention of the average man. AI mimics probabilistic reasoning, statistical analysis, and linear logic models, which tend to reduce diversity, complexity and variability. 

 

And the most pervasive AI at the moment is not the helpful or harmful robot that has captured our imagination, but a mundane yet ubiquitous power tool that currently either makes or guides choices in most of the critical areas of our lives: who is hired, what medical treatment we receive, whether we get a loan or credit, whether we get into college, whether we are a security risk, whether our concerns receive attention in media or in politics… And I'm worried about the AI tools that are deployed to make human choices more efficiently, following the formula they are presented: more accurately, at a faster rate, and at a larger scale. 

 

The AI that is used to optimise past successes. 

 

AI, in optimising past patterns of success more efficiently, also amplifies, accelerates, and automates past discrimination, because of the efficiency, speed, and scale of AI, and because of the vicious feedback loops it sets in motion. This discrimination increases exponentially with each cycle of machine learning, speeding us to greater disparity and pushing us to greater homogenisation. 

 

Before these seemingly mundane decisions were mechanised using AI, we could make exceptions, appeal to qualitative human judgments that marred, of course, the quantitative perfection. With current AI this possibility is removed, and AI exponentially amplifies the resulting disparity. And we have to remember the McLuhan saying that 'first we shape our tools, then our tools shape us'. The lived experience of many people with disabilities, because of this, is making the best of a system that isn't made for you. Most automated decision systems are intentionally designed to exclude people who are outliers. Even assistive technologies using AI, such as instructional tutors for struggling students, voice recognition systems, and pattern recognition systems to assist or replace sight, don't work if you're at the edges. The irony is that the people who need them the most are the people who have the greatest difficulty using them, or can't use them at all. So, the question to ask of our decision systems, whether they are traditional decision systems, decision systems guided by data, decision systems guided by AI, or automated decision systems, is: what happens to the exceptions? What we know from disability is that everyone is an exception. We equate evidence with majority repeatability, and statistical probability, if you're not like the average, is wrong. We equate impact with a single measure for the largest homogeneous number. What about heterogeneous groups that need different measures? In our politics, where we have reduced democracy to one person, one vote, and majority rules, the trivial needs of the majority often trump the critical needs of the few. What we see now are the cobra effects of not attending to diversity, variability, and complexity. And we're falling into the rut of mono-causality when the causes are complex and entangled. 

 

There has been a great deal of attention lately to AI ethics. We have guidelines, auditing tools, and even regulations to address bias. 

 

With the rise of concern about AI bias, it is important to note that the problem is more than data gaps and algorithmic bias. Even with fully proportional representation in data, and the elimination of all human bias in algorithms, current AI will still be biased against minorities and outliers. 
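
A toy numerical sketch of that point (the populations, slopes and sizes are invented purely for illustration): even when a small minority is fully and proportionally represented in the data, a single model that minimises average error still fits the majority's pattern and stays badly wrong for the minority.

```python
# Toy demonstration: proportional representation alone does not remove
# bias against a small minority when one model optimises average error.
import numpy as np

rng = np.random.default_rng(1)
n_major, n_minor = 9_500, 500     # minority present at its true 5% share

x_maj = rng.normal(0, 1, n_major)
x_min = rng.normal(0, 1, n_minor)
y_maj = 2.0 * x_maj               # majority pattern: y = 2x
y_min = -2.0 * x_min              # minority pattern: y = -2x

x = np.concatenate([x_maj, x_min])
y = np.concatenate([y_maj, y_min])

# Least-squares slope fitted over everyone is dominated by the majority.
slope = (x * y).sum() / (x * x).sum()
print(f"fitted slope: {slope:.2f}")   # close to the majority's +2
print("majority mean error:", np.abs(y_maj - slope * x_maj).mean())
print("minority mean error:", np.abs(y_min - slope * x_min).mean())
# The minority's error stays large even though none of its data is missing:
# the harm comes from averaging, not from a data gap.
```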

 

And AI auditing tools are misleading in that they don't detect bias against outliers and small minorities, or anyone who doesn't fit the bounded groupings. AI ethics certification makes it even harder to seek justice for people who are not considered. 

 

While people with disability are most vulnerable to data abuse and misuse, privacy protections don't work: if you're highly unique, you will be re-identified. And current strategies like differential privacy remove the helpful data specifics that you need to make the AI work for your unique needs. Unfortunately, most people with disabilities barter their privacy for essential services. 

 

However, I think there is an important opportunity. The pandemic has taught us that we live in a complex adaptive system in accelerating flux. Everything is rife with feedback loops, entangled and connected, and cannot be isolated. Change is viral, not engineered, and this complexity is only increasing. And one insight of complexity theory is that we're currently stuck on a local optimum. To find our way out of this, and avoid the next crisis, we need to reach the global optimum. 

 

To do this we need to include the people that are at the edge, who have a better view of the whole terrain, are more diverse, and are not invested in failing strategies. Not the people in the complacent middle. None of us are safe until all of us are safe. And designing our systems to address the needs of outliers and small minorities will benefit everyone. 

 

If we ignore the edge, it harms all of us. Because out at the edge is where we find innovation and detect weak signals. People with disabilities are the best stress testers of our decision systems. They are in a sense, the canaries in the coal mine. 

 

And we need to abandon the idea, entrenched in our predictive optimisation systems, that there is one winning solution, fix, or best practice. Our alternative in the IDRC is the virtuous tornado planning process, whereby we iteratively stretch to reach the outer boundaries of our human starburst. 

 

In our We Count project we realised that we need to start at the edge and not deny diversity or complexity. We're working to ensure that people with disabilities have access to shaping data science. We collectively address data gaps and biases, we are co-designing protections against data abuse and misuse, and we're creating more equitable decision supports. 

 

We provoke with models like the lawnmower of justice, where we take the top off the Gaussian curve to remove the privilege of being the same as the majority. We're flipping algorithms that bias toward the average and optimise past successes. Instead, we're focusing on the edges, like in our inverted Wordle, where unique and minority words are centred and grow. We're creating alternatives as well to things like résumé filter systems, which promote culture fit, choose people from the same university, or people like the favourite employee; instead, our system pushes against monocultures by promoting diverse perspectives. And we're exploring data ecosystems where data is gathered by the individual and contributed to a cooperative data trust, establishing thereby bottom-up, reciprocal, community-led data ecosystems. 
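
As one small concrete reading of the inverted-Wordle idea (my own sketch of the principle, not the IDRC's implementation): weight words by rarity instead of frequency, so the rarest voices render largest.

```python
# Sketch of an "inverted Wordle": conventional word clouds size a word by
# its frequency; inverting the weight makes unique and minority words grow.
from collections import Counter

def inverted_weights(words: list[str]) -> dict[str, float]:
    counts = Counter(words)
    return {word: 1.0 / count for word, count in counts.items()}

words = ["access"] * 40 + ["choice"] * 10 + ["wheelchair-backwards"] * 1
for word, weight in sorted(inverted_weights(words).items(),
                           key=lambda item: -item[1]):
    print(f"{word:22s} weight={weight:.3f}")
# The once-invisible single-occurrence term now carries the largest weight.
```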

 

We're also creating tools to reduce harm by signalling when a model will be wrong or unreliable, and creating dataset nutrition labels that declare the provenance of a dataset, what data it includes, and what proxy data is used to replace the real thing. 
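
One plausible way such a signal could work, sketched here as an assumption rather than the IDRC's actual tooling, is to flag any input that sits far from the training data, for instance by Mahalanobis distance:

```python
# Sketch of a harm-reduction signal: flag predictions for inputs that sit
# far outside the training distribution, where the model is likely wrong.
import numpy as np

def unreliable(X_train: np.ndarray, x_new: np.ndarray,
               threshold: float = 3.0) -> bool:
    """True when x_new is so far from the training data (by Mahalanobis
    distance) that a prediction for it should not be trusted."""
    mu = X_train.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X_train, rowvar=False))
    d = x_new - mu
    return float(np.sqrt(d @ inv_cov @ d)) > threshold

rng = np.random.default_rng(2)
X_train = rng.normal(0, 1, (5_000, 4))        # the population the model saw
print(unreliable(X_train, np.zeros(4)))        # False: dense middle, trust it
print(unreliable(X_train, np.full(4, 4.0)))    # True: edge case, signal doubt
```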

 

We realise that intelligence that understands, recognises, and serves diversity may help lift us out of our current crisis. 

 

Thank you and I hope to continue the conversation. 

 

Dr Lyndal Sleep:

Thank you so very much Jutta. It was very much appreciated. 

 

Next I'd like to welcome Karen Fisher. Karen is a professor at the Social Policy Research Centre at the University of New South Wales. Her research interests are in the organisation of social services in Australia and China, disability and mental health policy, inclusive research and evaluation, and social policy process. Karen applies mixed methodology and adopts inclusive research methods with people with disability, families, policy officials, and service providers. Thank you so much for presenting today, Karen, and for sharing your time and expertise with us. I'll hand it over to you now. 

 

Prof Karen Fisher:

Thank you, Lyndal, and thank you very much Gerard and Jutta for your papers, which I think lead very nicely to mine, which is much more embedded in some of the voices from our research about the risks and problems of introducing automated decision-making into this type of context. I'm just going to share my screen and hope that works. 

 

Right, so as Lyndal introduced, my work is empirical research with people with disability. Today I'm going to give two examples, at the extremes, of why managing decision points within a social service system like the National Disability Insurance Scheme is so difficult and, I would argue, problematic. 

 

I'd like to preface that by saying that I think one of the reasons why it's so difficult and problematic is that in a country like Australia we have such a rationed social service system. If we were speaking about automated decision-making in a system more like a Scandinavian one, where there's more of an assumption of rights and entitlement to social services, then perhaps automated decision-making might be more feasible. But what we have instead is a highly rationed system. The two extremes that I'm going to give examples of today are, first of all, even getting access to the NDIS, and secondly, managing a package of funds in the NDIS, which is called an NDIS plan, and the implications that has for automated decision-making. So, for those of you who are not familiar with the National Disability Insurance Scheme: the legislation is quite good, I think. It assumes that these are free services paid through a public insurance scheme from general taxation and an income tax levy. It's designed around universal coverage, with a support package called a plan, or referral into mainstream services, and it funds support services and equipment. Income support is entirely separate. The individual packages or plans are aimed at about 10 percent of people with disability, that is, those people with permanent and significant disability. Already it's running at more than 10 percent, and this identifies one of the initial problems about assumptions as to what disability is. The package or plan can be managed by the person themselves, called self-management, and my second example will come back to that; or by an intermediary, called plan management; or by a service provider or the National Disability Insurance Agency. And the person can use it to buy services from the market, either from an NGO or from the private sector, or to employ someone themselves. Access, of course, relies on trusted relationships for the people I'm talking about here. They are people who are otherwise excluded from the NDIS for structural, social or personal reasons. Structural reasons such as where they live, or their contact with the criminal justice system. Social reasons such as not having the networks, support or information to actually understand disability and disability service systems. And personal reasons such as not actually identifying as having disability, which in many cases is a bureaucratic category rather than a self-identity. And particularly those with psychosocial disability or socio-economic disadvantage: the intersectionality that Gerard referred to. 

 

So, one of the programs that we researched was an example of trying to assist people in this category to access the NDIS. It was called Enable In, run by People with Disability Australia, for people who have social and psychosocial disability, and particularly homeless people. The program ran one-to-one information and peer support groups, intensive support and referral for people through these groups and service providers, and then assistance to people to understand what the NDIS actually was and how to navigate it, because it's an incredibly complex bureaucratic system. 

 

What that program found, and I think these are the implications for whether or not automated decision-making is actually even relevant to these people, or to a social service system like the NDIS, was that decisions about how to even get into the NDIS were relationship-based: relationships with the person with disability and with the service provider. It took an enormous amount of time to develop trust, particularly because these are people who have a long experience of unreliable, poor-quality services, and of people who have not respected their trust. And so it takes a lot of time for these people to understand even the relevance of disability and other social support in their life. One person said, for example: "I've received one-to-one time from Enable In. I found the staff was someone easy to talk with, and to trust. They were very empathetic to me. I felt listened to and heard. After almost 10 months keeping in touch with the staff, I felt that I'm ready to trust someone to use their judgment to connect me with services." 

 

So obviously, totally the opposite of an ADM situation. The second element that was relevant in this program was that service networks were vital to framing decisions. It wasn't just a decision about the NDIS; this was within a system, and for these people it was between the disability, homelessness and health systems primarily, but also of course education and the criminal justice system, and many more. All of those parts of the system have to collaborate around the person's whole-of-life needs and preferences, to demonstrate good practice to them in a place where that particular person was comfortable. One of the staff said: "It required intensive support. It's central to working with people from hard-to-reach communities as they all have multiple support needs. Some of the outcomes of providing the support include building confidence, empowering people in the situation they're in, but also making time to connect with other organisations involved in a person's care." 

 

So I'm now going to turn to the second extreme, which is when you do finally win the lottery and manage to get into the NDIS: you've got this choice about how you manage your package or your plan. Some of those people, it's running at about 20 percent at the moment, decide to self-manage their own plan. The reasons they give, in some empirical research we've been doing, include that they're looking for control and discretion in their decisions, choices and spending, being able to spend on meaningful activity and in emergencies. So you can see here we're seeing words that are absolutely critical to ADM decisions: is it actually possible to design a system that enables someone to still experience these preferences in their life? The second reason they gave was to avoid the bureaucratic arguments that come with plan managers, or with having their plan managed other than through self-management. So, although the NDIS is designed around choice and control, they found that even with plan management, that intermediary arrangement, they weren't able to exercise their own control and choice. And the third reason was around cost, time and responsibility. One of the NDIS participants said to us, for example: "You feel like you can manage your own life. We all manage our own life; we pay our bills. I have a few extra bills because I have a disability, but I'm outsourcing those things that I need help with and NDIS is paying for that. I'm very thankful." 

 

And this person was able to say that because they had arrived at the decision that the only way to avoid all of those bureaucratic restrictions, and here I draw a parallel to automated decision-making, was to manage it themselves. But that came at a cost. So finally, I'd like to draw some conclusions about what this means for ADM and responsive decision-making. At the first extreme, it has implications for categorising support needs as disability, to even be eligible for a scheme that rations the concept and practice of disability support, in a context with a range of social services and whole-of-life needs. People don't necessarily see themselves as having a disability, and they don't see their support needs as relating to disability. Secondly, it requires strong trust relationships with peers and professionals, and it's very difficult to see how ADM can cope with building these trust relationships. And thirdly, it requires specialist community expertise, particularly through advocacy organisations, to even find these people and develop trust, in order to be able to get to the point of saying: well, actually, here's something you're entitled to, and we're going to help you get through these systems so that the system recognises your entitlement. The second set of implications is around managing decisions about how to spend social services support funds. This is partly because the purpose of the support is to enhance choice and control, and that requires responsiveness and flexibility, which, I would say, at this stage of what we understand as ADM, is the opposite of what ADM is able to deliver. And of course, there are very high-risk consequences of being unresponsive in decision-making, and this was reflected by many of the participants who use self-management: they found that without self-management they were left in a very vulnerable position, particularly in emergencies, and that's even within a system that has yet to use many of the benefits of ADM in the way that NDIS plans are delivered. I think it's terribly important in today's forum that we're considering these first-person voices. There are obviously potential benefits from ADM, but our service system at the moment, particularly as a rationed system that doesn't promote entitlement, is one where we have to be hyper-aware of these risks. 

 

Dr Lyndal Sleep:

Thank you so very much Karen, and also Gerard and Jutta. That completes our international researchers’ presentations, and we appreciate your input and your expertise and time very, very much. Next, we’ll have some voices from the field of disability services. 

 

We are very fortunate to have Justine O'Neill present for us. Justine is the Chief Executive Officer of the Council for Intellectual Disability. Justine works with the Council for Intellectual Disability to advocate for the rights of people with intellectual disability, and to build on CID's mission to create a community where all people with intellectual disability are valued. 

 

Thank you so very much Justine. 

 

Justine O’Neill:

Thanks Lyndal, and hello everyone. I'm meeting with you from Gadigal land in Sydney, and I am the person who got the invitation and went, "Uh, what's that? Never heard of it. Is that Robodebt?" So, very much a voice from the field, from people who haven't known about this area of research. My organisation is a systemic advocacy organisation; we don't provide disability services. And I think my reaction to this invitation sums up, perhaps, everything I'm going to talk about now, which is that the earlier presenters have already talked about people being left out. People with intellectual disability are already very much at risk of being left out of decision-making about their own lives: already subject to laws like the NDIS nominee system or guardianship systems that put them at risk of formally losing decision-making rights, and often informally having decisions made by friends, families, workers and others. So we know that people with intellectual disability are already so subject to attitudes and assumptions from the people around them about their skills and abilities, and that immediately creates a risk. To get the positive stuff out of the way quickly: given those attitudes and assumptions, perhaps there are good things about creating a system where you can avoid humans with unhelpful attitudes and replace them with more positive technology. Karen spoke to that a little bit, and I have to say that, having learnt so much this morning, if Jutta was directing it all I'd be pretty much fine with it. Jutta and her starburst model look like such an informed and sensitive way to work through the issues that come up. But of course, it's the assumptions that go in that need the critical reflection. So I'm just going to say a few things and hand over to Terry. Our questions would always be about how people with intellectual disability are being included in co-design.

 

I'm not a person with intellectual disability; probably the CEO of my organisation will never be a person with intellectual disability. There may not be many people with intellectual disability in this forum today, and it would be a difficult forum to be part of, because I've been learning lots of new language and concepts while I've been here. So, the risk is always so high that people with intellectual disability won't be in the room when the decisions are made, when the assumptions are created. So how will people be included in co-design? How will they be included in challenging the assumptions that go into whatever systems are built? How will people with more complex communication needs in particular be involved? And all of that intersectionality that we've talked about: how will the assumptions account for the great diversity of life experience, culture, and views of people with intellectual disability? And how is support for decision-making built into systems as well? I think some of those things have already been discussed to some extent, the question of universal design and who we're leaving out. I think the other parts of this that are important are at the other end. When systems are built, hopefully in the most inclusive and wonderfully adaptive and diverse way possible, how will people learn about them and know how the systems work? Because that might be the point where power is lost again: where people have difficulty even knowing what happens to their own information, where it's kept, and who's making decisions. And how will people be supported to know what their rights are? To challenge decisions, to be actively part of knowing what decisions are going on, how complaints can be made, how appeals can be lodged? What our members with disability tell me already is that technology is, on the whole, never designed in an inclusive way: hard to learn, hard to resource, and never going to be the whole picture. So there's a process of designing it, but there's also a process at the end of recognising the limitations in what technology can achieve, how far people with intellectual disability can be involved in that technology, and whether there will still be options available for people where the technology just isn't going to be suitable or inclusive enough. So that's it for me. Thank you very much, and over to Terry. 

 

Dr Lyndal Sleep:

Thank you so much Justine, that was very much appreciated. Your perspective from the experience of people with intellectual disability is very important. 

 

Our final presenter today is Terry Carney. Terry is Emeritus Professor of Law at the University of Sydney Law School, where he was a long-serving director of research and past head of department. A fellow of the Australian Academy of Law, he is a past president of the International Academy of Law and Mental Health. Terry's academic contribution and impact are extraordinary: he is the author of nearly a dozen books and monographs and over 200 academic papers. His most recent work is Carney, Tait, Perry, Beaupert and Vernon, Australian Mental Health Tribunals: Space for Fairness, Freedom, Protection and Treatment. Terry is also an associate investigator of the ARC Centre of Excellence for Automated Decision-Making and Society, but today he is contributing to the voices from the field part of this session, drawing on his extensive experience and expertise on automated decision-making and disability services. Professor Carney has chaired various government inquiries, including Victorian inquiries on child welfare practice and legislation on health law, and was a member of Australia's pioneering inquiry into adult guardianship. He oversaw the rewriting of the Social Security Act, and for nearly 40 years served as a member of the Social Security Appeals Tribunal and its successor, the Social Services and Child Support Division of the Administrative Appeals Tribunal. In addition, Terry has kindly contributed to organising this event, which is very much appreciated. Now over to you, Terry; I'll stop sharing and leave it to you. 

 

Prof Terry Carney:

Thank you very, very much. I’m speaking from the land of the Jerrinja clan of the Wandi Wandian people, and for people who wonder where that is, it’s down near Nowra and Jervis Bay. 

 

I like fly-ins; that was a fly-in. There are all kinds of things that I talk about, and these slides will be available, you know, as soon as people request them. And I guess the theme is in that bottom quote from Venning. I'm talking about the NDIS, and I'm talking about two aborted ADM initiatives that were introduced into the NDIS in Australia: one permanently aborted, the other, many of us fear, perhaps coming back after the next election. The NDIS is the poster boy of the poster boys. Both in the UK and in Australia, automation of decision-making in government has been prioritised in the social services, and that includes social security, but within that space the NDIS was given top billing. So it's unsurprising, in a sense, that the two initiatives that I'll be talking about have already been and gone, and have proven to be as problematic as earlier speakers have touched on in their more extended papers. I guess this Venning quote is really about the tension between the personalisation, individualisation, control and other critically important values that are legislated, as was indicated, in the overarching governance of the NDIS: on the one hand all the good goals and objectives, against the concern that government always has, which is, particularly in Australia as Karen was saying, to ration, ration, and ration even more, so that you spend less and less money. And secondly, to dress that up for public appeal in the language of: well, we're not actually penny-pinching here; we believe that it's important as a matter of justice and fairness that we should get uniformity of outcomes in whatever the administration is, here in the NDIS. Now of course, as Jutta has made eloquently clear to us, another way of putting what Venning is saying about that tension is that it's an attempt by government to eliminate the outliers, to have only administration which sits in the dense centre of that starburst, rather than at the outer rims.

 

So that's the theme, and even, to quote speaker Goggin, how the NDIS authority really is the poster boy. This article by Park and Humphry is very good in providing detail about the things that I talk about; you'll find it cited a couple of times in the slides that I'll quickly run through. And Park and Humphry's conclusion, back in 2019, is that there's been virtually no understanding of the issues at stake on the part of those responsible for contemplating how to introduce artificial intelligence into the social services, and particularly into such a sensitive area as disability services and the NDIS. So why is it a much tougher ask in this area? Partly it's that conceptual issue around the centre and the outliers of the starburst, but it's also that disability services are about a human relationship and trust between the person wanting to access a service and the service provider. And it's not just a relationship; it's very complex. It's about issues where there are multiple considerations in play for each individual potential service user. Now, that's not to say that AI is unable to deal with a vast number of variables and great complexity and so on. You know, the attempt to teach cars not to run people over at intersections is an example of the complexity. But it's why the problems are likely to be larger if you, as we have in Australia, target disability services as the first cab off the rank, rather than something a lot simpler, like our income support system. And you know, that reference to Robodebt: yes, there was a 1.8 billion dollar catastrophic imposition of unlawful debts on lots and lots of people who didn't owe a debt at all, but it was within a really simple, by comparison, area of service or government delivery. That's the point that I'm making there. So, the two things that I want to talk about quickly… 

 

Karen's already indicated how the NDIS works, and I'm just concentrating on the access decision and the quantum of services. You know: are you eligible to be an NDIS participant, and if so, how large or small should the resourcing that's made available to you be? So, 20 percent of you elect to self-manage, and the other 80 percent have it managed by the bureaucracy of the NDIS; I'm just looking at those two questions of getting in, and how much do you get. Getting in is based principally on substantially reduced functioning, and the quantum of support, as legislated, is determined by answering the question of what is the reasonable and necessary support that you as a participant require. In other words, it's a very subjective test. It's not something that is numeric on its face; anything but the sort of thing that looks as if it would be responsive to being turned into arithmetic. So, this next point is an important one. When the NDIS was introduced, there was no artificial intelligence at all. There were lots of humans, perhaps doing their best, but incompetently doing their best. And in the first year or so, as people who were within one of the state systems and getting disability services there were being transitioned into the national scheme, there were crude, even cruder than the ones I'm about to talk about, template plans and suchlike that were simply imposed on people, sometimes based on just one brief phone call with the person who was applying to come into the system. So that's an important counterpoint: human beings can impact adversely on vulnerable people's diversity and other rights in the disability space to an even greater degree than might be wrought by ADM. Which isn't to say that we should be entertaining either; just that we shouldn't have this starry-eyed notion that keeping ADM out altogether and leaving it all to humans leads to a utopian, wonderful system. It didn't: it was an appalling first year or so of experience that people had, and Karen Fisher's work, and others', deals with that. So, the first aborted thing that happened, and I'm looking at the time, so I'll be quick, was that the NDIS said: well, we'll have a chatbot, Nadia. This would be Australia's first, and it was a very sophisticated chatbot, a machine-learning one that works with really big data sets to train the chatbot to be, in this case, an alternative to the human being. 

 

There was a reference just a second ago, in the previous presentation, to the fact that somebody with autism, for example, might actually prefer to have a machine interaction in place of a more problematic engagement with a human being. So, this one was being trained to take up some of the work of people who were contacting the NDIS to see whether they were eligible for the scheme, and then to interact with them once they'd been admitted and had a package of services and so on allocated to them. And it was going to use natural language, and it was going to, amongst other things, be able to engage on an emotional level with a person. So, for instance, for Sydney Swans supporters like myself, it was being programmed to remember that I was a Sydney Swans supporter, and it would be able to introduce into a conversation with me the fact, you know, of how unlucky my team was to lose a final by just one point, as a way of lightening the enterprise. Well, why didn't this proceed? Because machine learning with a big data set has to keep engaging with the big data sets to further refine, to get better and better, just as the automated car was supposed to get better and better, and actually got worse, as Jutta said, at not running over people who push their wheelchair backwards. And the risk of Nadia making errors about access or a person's services was potentially devastating. And because it was so devastating, government, the agency, abandoned the scheme, because the risk of, you know, blighting a person's life in a major way was just not something that you could contemplate as an appropriate outcome. Which of course raises the question: should we ever be thinking about having a chatbot, at least in the next decade or so, in an area as sensitive as this one for service delivery?

 

The second abandoned scheme, the one that may well come back, was one that really wasn't about AI to start with. It just said that, in place of deciding subjectively whether somebody has a sufficient functional loss to qualify for the scheme, we'll do as we do for the Australian Disability Support Pension: we'll have impairment tables, numeric tables, things that turn your experience into a number, a score. And if you get a qualifying score, you get the service; if you fall below the threshold, then you miss out. But that wasn't really AI at all; that was done 25 or 30 years ago in the human administration of disability services in Australia, and indeed much of the world said, you know, in the interests of uniformity and fairness and equality and all this sort of nonsense, we're going to take human beings making subjective judgments out of the equation and replace them with some sort of quantified instrument. The ADM part here was that those scores were going to generate 400 presumptive budget templates. As I said, the human beings, when the scheme was first introduced, had about a dozen or a couple of dozen of these sitting on their desks that they'd look at and try to apply to somebody transitioning from the state scheme into the national scheme. This was going to have 400 of them. And that's why it got to be called robo-planning: because of the automation, the removal of human judgment from checking how adequate the plan package would be.
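
To make the mechanics concrete, here is a purely hypothetical sketch of that score-then-template pipeline; the threshold, bands and dollar figures are invented, and this is in no way the NDIA's actual algorithm:

```python
# Hypothetical robo-planning pipeline: a single functional score gates
# eligibility, then indexes into presumptive budget templates. Everything
# individual about the person has been collapsed into one number.

ELIGIBILITY_THRESHOLD = 20            # invented qualifying score

# Stand-in for the ~400 presumptive templates: score band -> annual budget.
BUDGET_TEMPLATES = {
    (20, 39): 15_000,
    (40, 69): 48_000,
    (70, 100): 110_000,
}

def robo_plan(functional_score: int) -> int | None:
    """Return a templated budget, or None if the score misses the threshold."""
    if functional_score < ELIGIBILITY_THRESHOLD:
        return None                   # one point short and you miss out entirely
    for (low, high), budget in BUDGET_TEMPLATES.items():
        if low <= functional_score <= high:
            return budget             # same template as everyone in the band
    return None

print(robo_plan(19))   # None: just below the line, no access to the scheme
print(robo_plan(21))   # 15000: no individualisation within the band
```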

 

So, what was the concern about it? That it would reduce rates of eligibility, that the packages would be smaller, and that there'd be much less individualisation, if any. Now, it was taken off the table by government in July, but many fear that there is so much momentum behind it, and so much money already spent on consultants and others, that by this time next year it may be operational; the cynics would say it's only been aborted because of the electoral pressure generated by lobbying from the disability sector, in particular raising these sorts of issues. Look, there'll be a written working-paper version later that people can get. The point about this slide is that what I've just been describing is not unique to Australia at all. We see it overseas, particularly in the Medicaid example. For people who don't know, Medicaid provides those welfare-type supports for disability services in the US. There's national control of the standard, and the national level said: we want better auditing of how the money is spent. But all the administration of that scheme is at state level, and one of the states applied a geo-positioning app to monitor how home care was delivered, and it was an utter and complete disaster. Leaving aside all the privacy and other incursions, it was as stupid as this: as soon as the caregiver walked outside the person's home boundary, they became ineligible to be paid for the service that they were delivering, in, you know, a couple of hours of home care and support for the individual. So other parts of the world are dealing with this, including the province of Ontario, in an employment services situation, the one you'll see at the bottom of the panel. And what the Ontario employment services people did was basically ignore the app and the ADM altogether; they developed a whole series of workarounds to stop the app and the ADM from operating the way they were supposed to. So, look, I've talked too long. 
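
A hypothetical sketch of the geofencing rule at the heart of that failure (the coordinates, radius and payment rule here are all invented for illustration, not taken from the actual app):

```python
# Hypothetical geofence rule from the Medicaid example: a minute of home
# care only counts as payable while the caregiver's GPS position stays
# inside a radius around the client's home.
import math

HOME = (-34.87, 150.60)    # invented client home (latitude, longitude)
RADIUS_M = 50.0            # invented allowed boundary, in metres

def distance_m(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Approximate metres between two nearby (lat, lon) points."""
    mean_lat = math.radians((a[0] + b[0]) / 2)
    dy = (a[0] - b[0]) * 111_320                       # metres per degree of latitude
    dx = (a[1] - b[1]) * 111_320 * math.cos(mean_lat)  # longitude shrinks with latitude
    return math.hypot(dx, dy)

def minute_payable(caregiver_pos: tuple[float, float]) -> bool:
    """The app's rule: care time counts only inside the geofence."""
    return distance_m(caregiver_pos, HOME) <= RADIUS_M

print(minute_payable((-34.870005, 150.600005)))  # True: inside the house
print(minute_payable((-34.871000, 150.601000)))  # False: stepped past the
# boundary (say, taking out the rubbish); care delivered but unpaid
```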

 

I guess my point is that we don't want to eliminate the outliers; the whole point in disability services is that we need to be accommodating the outliers, the diversity, the relational complexity, and the things that, at the moment, really only human beings are sufficiently skilled at. Disability services, as a consequence, should be a last adopter, not a poster-boy first adopter, of ADM and chatbots or anything else. Instead, we should be looking at becoming a first mover on the unrealised potential of AI: that is, in all of those other systems that Jutta explained, the systems that we all use, which are designed in ways that make it more difficult or impossible for people in the disability community to take advantage of those basic AI systems in the way that the non-disabled community currently can. And so my message would be that we should be a first mover by, basically, the NDIS or the Australian government contracting Jutta and her institute to do it the way she explained so eloquently in her presentation. Thank you very much, that's me. 

 

Dr Lyndal Sleep:

Well thank you very much Terry, and that completes the presentations for today. Thank you so very much to all our presenters. 
