EVENT DETAILS
Automated Societies Panel: What do we need to know?
2 August 2022
Speakers:
Dr Jenny Kennedy, RMIT University
Kate Bower, CHOICE
Dr Melissa Gregg, Intel
Penny Harrison, Australian Red Cross
Malavika Jayaram, Digital Asia Hub
Professor Anthony McCosker, Swinburne University of Technology
Watch the recording
Duration: 0:52:20
TRANSCRIPT
Dr Jenny Kennedy:
Before we begin, I would like to acknowledge the traditional custodians, and their ancestors, of the lands and waters across Australia that many of you may have travelled from to be here this evening. I especially acknowledge the people of the Woi Wurrung and Boon Wurrung language groups of the eastern Kulin Nations, whose lands we are meeting on today, and whose lands were stolen and never ceded. This event is hosted by the Australian Research Council Centre of Excellence for Automated Decision-Making and Society. My name's Jenny Kennedy. I'm a research fellow at the Centre and delighted to have the opportunity to chair this panel. So let me introduce you to our panellists.
We have Kate Bower. Kate is expanding CHOICE's work on fair, safe and just markets to data misuse, including price discrimination and algorithmic bias. Professor Anthony McCosker is from Swinburne University. He researches the impacts and uses of social media and new communication technologies, with a focus on digital inclusion and participation, and data literacy. Penny Harrison is the director of volunteering at the Australian Red Cross and has over 20 years of experience in leadership and operational roles in the humanitarian sector. We have Malavika Jayaram, who is the Executive Director of the Digital Asia Hub and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. And Dr Melissa Gregg is a senior principal engineer and research director in the Client Computing Group at Intel. Melissa has an international reputation for her research in the area of technology, work and human factors.
So, I'll begin with my first question to the panel: how is automated decision making already playing a role in people's daily lives? And Penny, would you mind if I ask you to begin?
Penny Harrison:
Not at all. And I'm delighted that everybody's got their notebooks, because I'm often the dag with a notebook, so apologies if I look down. If I think about where I'm predominantly working, which is the humanitarian sector, one of our greatest successes is actually working in a multi-sector way, and working in deep collaboration, and you might be thinking, well, why would this be relevant for the humanitarian sector? Well, like everyone, the pace of all of these technologies, whether it's artificial intelligence, whether it's automated decision making, has a direct correlation to the speed, pace and impact with which humanitarian organisations can work. And in particular it actually offers us really important insights into how we can use data in ways that we have never been able to do before. And that ability, in microseconds, to understand patterns of movements of people, as you will all know, occurring in contexts in and around Ukraine, is just one example globally. The food insecurity that is occurring across West Africa – a lot of that information is now sourced in real time using these technologies. And that leads to huge improvements, but it also leads to big questions. Probably like all my colleagues at the table, we ask ourselves the same questions. Who gets to hold the data? Who gets to make the decisions about how that information is then used and shared? Which organisations get access, and why? What sorts of regulatory environments do we need? And as Red Cross, we're often working in environments where there are very limited forms of governance, let alone regulatory environments. So the impetus and the onus rests on the organisations operating in those spaces.
So, I'm going to perhaps maybe just kick off with an example. It's pretty close to us. You'll all be familiar with some of the impacts of the Black Summer bushfires. What you may not know is that there is in fact a national facial recognition database that was initially developed in 2017, and in 2019 there was a halt put on that – and it's actually a database that's accessible by all states and territories even though it's run federally through Home Affairs. And it enables a basic document verification service based on information that is held in that database. Now in 2019, there was a bit of a question of, well, geez, how did people give consent around this information, and what are the sorts of safeguards that we need to put in place? However, roll forward, and the Black Summer bushfires occurred, and you all know the scale of that. And as one organisation, one of the biggest issues we faced was how do you quickly verify someone's identity when they've just lost absolutely everything? How do we use these sorts of technologies? So we didn't, but in fact Services Australia did. They used that verification database to try to assist people to access relief payments very quickly. Now, I'm sure that had a lot of immediate benefit for those individuals who were able to be a part of that, but it does raise the question that we're actually using something in an environment that we know doesn't have adequate regulation and legislation around it. And I guess this is why things like the work of Ed Santow up at the University of Technology Sydney, looking at those sorts of frameworks around what this should and could look like, become so important – when you put elements of human rights, or certainly community, pretty much front and centre. So, that kicks it off. Thanks Jenny.
Dr Jenny Kennedy:
Thank you, Penny. Malavika, can I ask what you're seeing in your role at the Digital Asia Hub?
Malavika Jayaram:
Thank you. Thank you for organising this and bringing this great group of people together. I think, as someone who's a visible minority, one of the things that the internet promised us was that we could be invisible. We could be anonymous, we could escape from the types of racism and discrimination that we experience in the offline world. But hey, the internet wasn't just happy leaving us alone. So, there are new kinds of barriers, new kinds of discrimination, new axes and vectors around which this is being perpetuated online. So, I think for me, one of the really key areas of research as well as interest is how automated decision making affects particular demographics of people at large. And I think that's where a lot of the work needs to be, because we're dealing with not just instances of discrimination, but structural forms of discrimination that haven't been dismantled, that are being perpetuated, amplified, by these new tools and techniques which are invisible or opaque to most people. They don't know what's happening, so they can't protest, they can't engage in any kind of collective action against them. And I think it's particularly problematic that this happens at a moment when collective action is so hard. People are exhausted, people don't have the time, life is very complicated, and the old traditional forms of action – whether it was unions or other kinds of community action – are on the wane. So, I think it's particularly important. And I'll just say one other thing, which is that when there is a war on work, or a war on any kind of identity politics – as if you can say anything that reductive about people enforcing their own rights – it becomes really hard to actually fight for some of these benefits and obligations when everybody thinks you are playing politics and claims not to have any politics, saying: it's a game, it's online, it's fun, it's social media, you know, don't get so angry. So, we see automated decisions being made in spaces that ought to be free and fun and light, but they're actually taking the social and turning it into the political. They're taking the social and using it to make decisions about things that are really big: about your sexuality, about your entitlement to benefits, to welfare, to whether you can get a loan or not. And you know, I think we should stop using that example, because the impact of social media on loans is something we've talked about for a long time. But there are other ways we can actually describe new kinds of harms that are coming out. So, I think for me, the impact of these systems on our daily lives is really implicit and creepy in ways that affect particular cultures, particular demographics more than others. There is disparate impact on many levels. And I think for me that's where the real work lies.
Dr Jenny Kennedy:
Would anyone else like to jump in at this stage?
Prof Anthony McCosker:
I'd like to add also that we are really at a juncture. I mean, not necessarily a juncture, but there are a lot of crises, social crises, at the moment. And I think a couple of years ago, when we sort of entered into the many COVID lockdowns that we had here in Melbourne, we were working with community sector organisations – so the health services, social sector organisations – that were themselves kind of falling apart in terms of how they managed to continue to operate. And we were trying to run data projects with them, looking at ways they could optimise what they do, or improve their outcomes or impact. And it hasn't finished. Those kinds of issues and social problems are still there. I was talking to someone this morning about the massive wait lists for paediatricians, for diagnosis for kids, for example. And these are things that technology can contribute to, can have some impact on, in ways that we are all of course very cautious about, thinking through the ramifications, the potential harms. But to balance that, there are so many crises that need to be solved. There's something to think about.
Dr Jenny Kennedy:
The next question to the panel: what are some of the social and ethical issues associated with automated decision making in the contexts that you're seeing? Can I start with you, Kate?
Kate Bower:
I think – I obviously come at this from a consumer protection angle. So, I kind of want to make two points. One is how digital products are exceptional, and the other is why we shouldn't treat them as an exception. They may seem like contradictory points. And they're exceptional for the reason that Malavika mentioned, which is the opacity. When we think about the history of consumer protection in Australia and the history of consumer advocacy at CHOICE, a lot of our work has been around product safety. And we do testing of products to ensure that they're safe. But one of the things that we've known in the past is that people know when they've been harmed by a product. So, if you purchase a heater and its safety device fails and it fails to shut off and it sets your house on fire, that's an obvious harm to you. You purchased a product, it blows up in your face, you know you've been harmed by the product. So, what's exceptional about digital products is that you sometimes don't know when you've been harmed. Which is not to say that you haven't been harmed; it just means that often the harm, or the decisions that lead to the harm, are many steps removed, or many people are involved, or many complex systems are involved in creating those harms. So that creates a problem. It's a problem for consumer advocates, regulators, researchers, and it's something we've all been talking about here: how to get past that opacity. So, in that sense, digital products and automated decision making are exceptional, in that they're different from other things.
But the way that I think it shouldn't be an exception is that – you know, this panel's called what do we need to know, automated societies: what do we need to know – and I think, let's invert that to what do we not need to know as consumers. And that is, we shouldn't have to be AI experts or privacy experts in order to interact in the digital world safely. That is not something that should be expected of us. If my toilet breaks, I can call a plumber. I don't need to know how sewage works to expect a plumber to come and do their job properly. So I think that's what we expect of products in the marketplace: it should do what it says on the tin, and it should do it without hurting people. And if you can't do that – like, we hear a lot of, oh the complexity, oh the black box, oh we don't know how it works – if you can't do that, don't be in the market. It's that simple. That's not an unfair standard. That's the same standard that we apply to all kinds of other types of products. And in that sense I don't think that automated decision making, or digital products, or products made in the digital space – whether they're services or software as a service, or whatever we're talking about – should be any different from any other type of marketplace. And the same safety and efficacy standards that we expect should apply.
Dr Melissa Gregg:
The other thing that made me think of, just as you were talking, is how even if there is one set of coordinates when a digital product enters a home, that may change over time. And I am reminded of the work that we did together on smart home technologies, and how, depending on who sets up those technologies, there may be a certain power dynamic that only comes into play when things go wrong. So, when we were doing work on smart home technologies to try and inform Intel engineers on what to optimise for, several years ago, this was a security-based industry for how to keep a home safe when largely men were not in the home and were off wanting to make sure they had locked down their property. And by bringing a gender studies lens to that we also started to realise that there was this growing number of partners of digital product owners who were becoming subject to new forms of threat or violence, potentially, because they didn't know how to operate some of these smart products, because they weren't initially installed by them. So, even if one person in a domestic context knows something about the harms at the point of purchase, as the relationship evolves, as homes take on many complicated characteristics, we're often faced with a situation where a technology is shared, and it's not just one user's behaviour which is the issue. So, one thing that I'm often thinking about, if we're talking about the design of technology, is: if we're only designing for individuals, we're not necessarily thinking about the groups, the collectives, the families, or in fact the nations or migrant populations, or variations on collectives, that also stand to be part of the harm involved, and have to be factored in.
Malavika Jayaram:
Can I also just add something to that – like, one more exception to your list. Which is that I think we often see harms as exceptions, as if the glitches, the problems, are the case of a few bad apples and the rest of the barrel is fine. We fail to notice that actually sometimes the glitches are the system working exactly as intended, and the harms are very much calculated. There's been a cost-benefit analysis, and there's been a decision made that we're willing to live with these, in the case of consumer products but with most other things as well. It's OK if there are false positives, false negatives, for this percentage, because for the rest of the people it will work. So, I think that's really problematic; we're ignoring the fact that it is the system working as intended. And I think the problem that also came up when I was listening to everyone is that you can't treat life and people as an optimisation problem. You can optimise a service, you can optimise a product, but when we look at everything through the lens of optimisation, someone is losing out. Someone is losing out, some values are less important than others. And I really loathe balancing metaphors – like, if I never heard another one again I'd be very happy – but when we look at everything like, oh, we have to balance rights against innovation, or we have to balance privacy and security against the national interest – there are ways of balancing that don't involve people losing out. And there are ways to do it without thinking in terms of a very mechanical, mathematical optimisation approach, ways that are actually people-respecting, human-rights-respecting. And maybe it's not as efficient, and we're willing to let go of some of those efficiency goals in order to have the other benefits of a democracy that isn't perfect, that is creaky, that is messy, and it's all going to look like Mad Max one day, and that's OK, right? We don't want a shiny sanitary universe where it's this thing that science fiction promised us but actually turned out to be a really creepy dystopian state where you can't read the books you wanted to. So, I'm just really against this sort of optimisation approach, and I just really wanted to put that out there.
Kate Bower:
Can I maybe build on that. There's this kind of idea – and I think we talked about it, if anyone was at that fairness and bias workshop earlier – of thinking about what are the technical solutions for these problems, versus thinking about what are the solutions, what are the outcomes? Virginia Eubanks has that idea of automating inequality. You know, these things are not existing in a vacuum. Like, the reason why bias and discrimination is a problem in tech is because it's a problem in real life. This is not operating in some kind of vacuum. You know, the example that was spoken about today, about Amazon's biased hiring algorithm – in fact, the algorithm wasn't biased, the algorithm was predicting exactly what was in the data, which was rampant gender discrimination in Amazon's previous hiring. Like, that's not a biased algorithm, that's real-life discrimination. So, I think it's important to think about what's possibly a technical problem, versus what's actually already a problem in society, where we're just automating what we're already doing. So, in that sense, we can't expect technology to save us from ourselves, right? We have to treat these problems as the problems that we're already dealing with in society.
Penny Harrison:
That's really interesting. If I could perhaps give an example of understanding where community sits and what the community needs are, and what the potential unintended impacts are at the end of those chains as they get developed. There are some pretty stunning ones – and I'm actually normally not a negative Nelly, but I feel like I've got to put them on the table, because I reckon we also then learn enormously from where profound mistakes are made, to then understand what we fundamentally have to shift in how we design into the future. And I'll try not to harp on it too much, but in the war in Ukraine there's this fascinating case study around Clearview AI, which I'm sure many people in the room have heard about. Now that technology has been deployed into a context which is highly volatile, where there is no border control, and it's being used, on the surface of it, to use this biometric technology to look at Russian soldiers, basically, who have died. But this technology actually is very likely being applied in other ways, including to recognise other so-called Russian individuals who may be infiltrators, and it raises enormous ethical questions. Because the scraping of the data that Clearview AI has used as its information source is extremely biased and makes all sorts of assumptions about an individual coming from an ethnic background. Now, coming from a humanitarian context, and at the Red Cross, where the fundamental tenets of the laws of war are being so profoundly challenged – like, we're asking ourselves the question, what does this actually mean for the Geneva Conventions, quite profoundly. But then on a really practical level, how do we begin to change this? So when we call out these examples, it's not that there may not be enormous benefit in the application of the technology, but why are these things happening, where are they going wrong so profoundly? And we seem sometimes unable to speak up against them. Now this is certainly not the case in this particular example, but another one that had an unintended impact that probably simply was just not thought about was written up in a Human Rights Watch report from 2021, based on the movement of the Rohingya refugees in Bangladesh, where again biometric technology was used by the United Nations High Commissioner for Refugees for an absolutely well-intended purpose. The problem they needed to solve was how to quickly deploy cash to individuals who had moved in this mass movement, with all of the pressures and rights which they are afforded as refugees. But what hadn't been factored in was who was going to make the decision about how that information was used. And the smart card information, which included the biometric data, was then shared with the Myanmar government. Now again, under refugee law you have rights, but there were no consent points offered. So if I think about where are these points, where are these opportunities – because they are – this is a golden opportunity to be saying that was a pretty bad decision. But then who had the decision rights? Did someone ask who had those decision rights? And then how do we encounter that when obviously there's a commercial imperative, there's a human imperative, wrapped around these case examples.
Malavika Jayaram:
And I think what also happens in those examples is that so much of it is done for the benefit of someone, but it's done in a top-down, centralised way, where the people that these decisions affect are not involved in the decision making, they're not participating – and I know Anthony will probably have a lot to say about ways to correct that – but I think in these contexts, there's also an impetus to say, this is a disaster, it's a crisis, we have one chance, one shot at going in there and collecting all the data we can, because we can't go there again. And the people doing the work are well-intentioned humanitarian actors saying, I'm going to collect everything that I can, someone in the IT department will tell me what I can't use, some lawyer will scrape it and say you can't touch that, it's illegal. Some IT person will someday automatically delete it so that it doesn't hang around the world forever and ever, but I don't have to make those decisions because I'm a doctor, I'm an aid worker, I don't need to think about those things. And I think that's a systemic problem as well, because we assume that there are silos, we don't all need to know all of this – and to Kate's point, we shouldn't have to – but I think there are certain sectors where the benevolence doesn't excuse the harms that occur, and we have to do better as communities to actually train people, saying: if you're doing data collection, have an IRB-type process. Academics do it, so why can't we transport something like that into other fields, where you do a kind of exercise where you go through what could be collected, who it might impact, the ways to do it, how do you anonymise. I know anonymisation is kind of the Santa Claus of the privacy world – like, we tell kids it exists but it doesn't really, you can't really anonymise – but you know, I think if you don't even make the effort, or if you don't acknowledge it's a possibility and therefore you have to route around it, and you imagine that somehow you can achieve anonymity, you're smoking crack, like it's not going to happen. There are no kids in this audience, right? OK. I assume if you had drinks, you're allowed to be in here and listen to this. But yeah.
We did a book sprint a while ago where we were trying to come up with a name for how humanitarian actors didn't really work with data, or thought they didn't. And we had one of those flow charts that said, do you work with data? No? No, actually you do. And you come back to the same stream – like, yes, you work with data, like nobody told you but you do. And we were trying to come up with a name, and we were thinking about this whole anonymity problem, and the only IT-security crazy hacker dude in the room said, well, if you really want to protect your data, what you really want to do is just shoot your hard drive into space; short of that, it's just not going to happen. So that was actually the title of our book, which was shooting your hard drive into space and other ways to protect your data, which maybe three people downloaded, but it exists somewhere on the interwebs – it's there.
Dr Jenny Kennedy:
So, beyond shooting our hard drives into space – and I would really like us to set aside some research money to at least test that – how can we ensure these technologies are being developed and used in more responsible, ethical and inclusive ways?
Prof Anthony McCosker:
Can I just say that this is the research aspect, and the evaluation aspect. And I think, in terms of harms, we don't always know what those harms are going to be. That's the reality of how these technologies get developed, and how they play out in practice. And you know, it's kind of the issue that the European Union is dealing with in trying to categorise different types of AI, for example, as high risk and low risk – you can try to do that, but it doesn't necessarily match to the spectacular and the mundane. The mundane can be high risk, it can be high harm. Also, those harms can be well down the track, and they can be subtle, they can be long term. So, the how to address this, I think, has to start with research, and not just research in terms of how to build better models. There's lots of talk at the moment about starting with the data pipeline right at the beginning, so it is that point about, when the data is being collected, what is the process? Who's involved, how are they trained, is there agency and ownership, is there a reciprocal relationship with the data subject, the person whose data is being collected? And then, how can we map that right through the process, through to the outcomes and what happens down the track?
Dr Melissa Gregg:
I think one of the other aspects that's worth considering on this topic is when the engineer doesn't even really care where the data comes from. Like, there's not that kind of intimacy with the data set that one would hope for, given the idea of a pipeline. You know, it's an engineering metaphor that assumes that you have visibility into that funnel. One of the challenges I think many professionals, including engineers, face is a sense of time poverty, where really you're just focussing on the result that you're optimising for – optimising again. And where you're sourcing that data from, you know, it's just a generic set that's available to you. So, you know, you're going to try and make use of it to test this thing that you want to improve. And I think that's one of the major problems that we have in introducing responsibility or even inclusiveness: the levels of abstraction that we're talking about when we're talking about training data just make that chain of custodianship, of responsibility, very difficult to trace. And the other point I thought was worth introducing at this stage in the conversation is, you know, we don't have a diverse industry. So, how can you possibly get a diverse outcome if we're talking about such a homogenous group of people that are empowered to run these technologies in the first place, without a ton of training that would introduce things like different forms of equity awareness, and also some of the more nascent but, to some parts of the world, very urgent topics – like, what is the energy justice behind the kinds of tools and methodologies and training sets that are being normalised in the way the industry considers itself to be performing best practice? So, the thing that I wonder about when we're talking about responsible automation is, what are we letting machines decide for us that don't even require a human in the loop, that don't even require a conversation about the effects of this professional practice that I am replicating, because I have been taught that that is the best way to do it. And that level of abstraction is what concerns me the most, because we're very far removed from both the human impacts and the ecological impacts of what these technologies are doing to the planet more broadly. That's my happy story.
Penny Harrison:
Can I build on your point – I wrote down intimacy. How fascinating, because it just made me think of, for Red Cross, the Restoring Family Links program, which matches people, effectively, globally, if you're displaced, impacted by war or other natural disasters. And the International Committee of the Red Cross has been – and I was perhaps going to point back to you, Anthony, as well – trying to safely design something. And it's just given me a question I'm going to ask them, which is: is it perhaps because there is an intimacy with how the data is originally being collected, which is almost individualised with that person, because the security and sensitivity of that information is so profound, that we've got to have the engineers in the room with the case workers, with the data scientists, trying to understand how they're going to manage these datasets in the background? And working through issues of consent, and then which bits of information would be – clearly not anonymised now – but how do you even go through the process? How do you safely learn? But I wonder if there's something in this question of intimacy that we haven't researched deeply? I don't know, my fellow friends.
Prof Anthony McCosker:
But it definitely plays out when you're talking about the application of different systems or technologies within organisational settings, for example. So, in the work that we've been doing with a whole range of non-profit sector organisations, large and small, across Australia, there's a bit of a disparity between the resources that are required to actually bring different groups within the organisation together around data projects and their application, and building things, and that, you know, desire to just grab things off the shelf and apply them as they've been trained and built for a completely different situation – most likely a commercial arrangement, or, you know, a customer service type arrangement that doesn't fit, say, a charity sector role. And if the resources are there and the will is there to actually put all of those pieces together and bring the silos together, it actually makes a big difference, and I do think that there is a role for capability building for all of us. We're not going to necessarily know how to get under the hood of a machine learning model, but the more each of us knows, the better. And in an organisational setting, then, the more that you can contribute to building something that does actually put into practice those principles of justice and equity and consent and ethics, and all of those kinds of things that you would want to see. The problem is that it just doesn't seem to be happening, mainly because of resources, and a little bit because of the lag between the speed of development and the need as well. As we said, there are lots of crises that need to be solved very quickly.
Dr Jenny Kennedy:
I just want to say as well to the audience, that we will have time for questions, and there will be two mics set up, so you might want to make your way to them when it is time. I’ll give you time to be thinking about those questions, but I want to ask one more to the panel, to each of you. What do those of us doing work in this space need to know, or what do we need to do better in order to learn more about what consumers need?
Dr Melissa Gregg:
I'm going to start, because that way we can go along and we can all be prepared. I've been thinking a lot about this question, since it's the title of the panel. And given the comments I just made about the ecological impact of an automated society, part of it is: is there anything we don't know at this point about how bad some of these technologies are for the planet? Given what we know, given the information dumps on how bad the climate situation is, how bad extinction is, how bad the droughts are, and the fact that cloud companies are searching the earth for cooler places to put their servers, we know already that there is a certain level of precarity upon us. It's not that we need to know anything more when it comes to some of the challenges that the future of the tech industry looks like taking us closer to. So, instead I'd like to turn the question around a little and say, what are you going to do with the knowledge you already have? Because we all need to do more in our everyday settings – I wouldn't say just to generically resist what's going on, because that's not action, that's paralysing in its own way. It's instead starting to think through, literally, how is you having your laptop open right now draining a battery that came from a certain place, when we could all be using notebooks? Now, they have their own problems. But genuinely thinking through: how am I implicated on a daily basis with the very same ecology and ecosystem and economy that these technologies are reproducing and exacerbating? And the thing that I also wonder about is some of the most egregious examples of automated decision making right now – I'm not sure if Ellie's here tonight – but things to do with blockchain directions, and the way that cryptocurrencies and the metaverse and other kinds of web3 conversations are heading; it's just like an ecological disaster. But the more we keep repeating these as destiny, as opposed to contested, we are also creating that reality. So, I'm partly trying to say, do what you can every day with the knowledge that you already have, instead of feeling paralysed. And I think part of what that means is getting much more acutely aware of those minute decisions that you make to get on a plane, or turn on your video, or not delete files, where you're actually contributing to the fortunes of those cloud companies every time you neglect to take care of that digital housework. Isn't that the term that you use? So, there's some other thoughts that I'd like to make sure that we ponder.
Malavika Jayaram:
Yeah, I think for me, picking up on what you said, Melissa, I think there is this sense that this kind of progress is inevitable, and that you can just add AI or blockchain or crypto to something and suddenly it becomes amazing and solves problems. But I think we don't ask the question of: what's the problem we're trying to solve? Who is it for? Do they need it? Do they want it? Would they like to do it themselves instead of us doing it for them? Can it be a co-creative process? We don't ask those questions, and therefore we will never know unless we take a step back and let people solve their own problems, and ask for support when they need it. I think one of the other things that I'm really concerned about is the extent to which – I mean, since I'm hating on everyone, let me also add funders to the mix. When all the motivation is for everyone to be working on automated decision making and AI, because that's what's getting funded, what's happening to all the problems we haven't fixed? We haven't fixed privacy and security, but unless you view it through the lens of 'this will help AI', that's not going to get funded. So all the grassroots work, all the work that's incomplete, unfinished, and the energy of activists, civil society and academics, is being diverted to this new shiny thing, without having fixed the broken things that – forget them being valuable in their own right – are necessary and essential to solving the bigger AI shiny things also. They're the structure and foundation you need. But instead we have the tech bros jumping around saying, let's have a digital Geneva convention, or let's have a Geneva convention for AI. Yeah, you have one, it's called the Universal Declaration of Human Rights, thank you very much. We did that a while ago; it works really well. We don't need 16 other guidelines or codes of practice and self-regulation, and co-regulation, and we'll do it ourselves maybe if we like, if you can find our terms and conditions maybe somewhere on our website. We've got instruments, and to Melissa's point, let's use the things we have, use the tools we have. Use strategic litigation. Put friction into the system. Use obfuscation, right? Use all the tools that we have, because this unequal bargaining thing – like, sneak preview, it's never going to get less unequal; if anything it's going to get worse. So, I'm here of course to shift the Overton window of doom and gloom, and, you know, what now becomes acceptable for how doom-and-gloomy we are on panels. But I think in terms of the diversion towards the shiny new things, funders are really complicit in this. All of the CFPs are not solving water management and energy and resources and communitarian things and, you know, getting the internet to people in the first place, because we think that was solved. We're also not looking at post-access. We're not looking through the lens of: we gave them the internet, did it work, and what did they do with it? You have to use an after-access lens to see why, even after the last mile problem was solved, people still did not use it. Was it a literacy issue, was it something else? Was it something we didn't even think of because we didn't ask?
There's a really great example that I'm really fond of. India introduced something called the Rural Employment Guarantee Act, so everyone was guaranteed a certain amount of work. When this was an offline process, every Friday the farmer would get all his labourers together and pay them cash for the amount of work that they had done. They then decided to link this to – surprise, surprise – a biometric ID program, saying: every farmer has to walk for six hours whenever they need to go to the nearest bank to actually deposit their wages, so why don't we give those six hours back into their lives so they don't have to do that? Let's just send it to a bank account that we've helped them create, because then we also get to tick the box on financial inclusion, right? And then what they found was that instead of people actually working more because they had those extra six hours, the data showed it dived horrendously.
Kate Bower:
We really need as many smart people as possible, solving these problems, because they are difficult.
Dr Jenny Kennedy:
Thanks. I’m very glad you’re all here too. I’m going to open it up to questions from the audience. So, if you have a question please raise your hand so that the mic can find you. There’s one up the back there.
Participant 1:
Hi. We're still only getting to grips with our own physical consent in lots of places, and that gets under attack, as we see in America every day. How do we educate people to understand virtual consent at the same time? So, if my medical history is being used, for good or for bad, in an automated system, how do we teach people that you have the right to consent over your virtual self, and your virtual information?
Kate Bower:
…data collection as part of their model. So, they're using all sorts of other AI, but they made the point that – and I know I said I wasn't going to talk about facial recognition, but it turns out I am – you know, in and of themselves, some of these technologies can be harmful on their own. Sometimes they're not. What is concerning, if we don't put the brakes on, is where we might be heading if we don't think about these things. So, like the metaverse, it's presented as an inevitable thing. Like this future where you walk into the supermarket, it identifies you, it knows where you are. It knows behaviourally what kinds of things appeal to you. It doesn't display prices, there's no unit pricing. Instead, what you get is a personalised price list based either on your willingness to pay, or the kinds of things that you'd like to buy. So, this is a kind of world where all of the mechanisms that we would normally have for transparency, to increase consumers' bargaining power and to increase consumer choice, and to enable people to be able to make those decisions transparently and fairly, are gone. So, kind of what's at stake is automating inequality to the point where it's beyond help. And that's a very kind of doom and gloom way of thinking about it. But as I said, none of this exists in a vacuum; this is literally replicating the bias and replicating the discrimination that we already have. I think if we don't treat this seriously, we will automate it to the point where we've lost human control, we've lost our capacity to intervene in that system. I think this idea of – I've got a quick personal anecdote – we've got time, don't we? My mother, when I was growing up, was the national manager of passengers and intelligence at Customs, which is now the Australian Border Force, and was responsible for the first implementation of facial recognition in Australia, which is SmartGate. And the conversations that they were having at that time – they were talking about should they use fingerprints, should they use facial recognition, which at that time was a very new technology – and the argument that she was making was, well, it only needs to be as good as a human customs officer, right? So it was like, what if it gets it wrong, you know, you can't trust this machine to do this thing. Well, it only needs to be as good as a human, and I think that's an interesting thing. Like, does technology only need to be as good as a human for us to trust it to make a decision, or does it need to be better? So, I think my actual argument is that it needs to be better. There's something about humans and the way that we make decisions – you know, if you think about autonomous vehicles, yes, we can give it a whole bunch of conditions and a whole bunch of principles, and it will operate on prediction and within certain parameters. But me as a person, I'm a person who's been in car accidents, I'm a person who's had a family member die in a car accident. If I'm going to be in that trolley problem, and I have to look someone in the eyes when I'm going to kill them, essentially, you know, with the machine that I'm controlling – that's a very different affective experience in making that decision, and I don't think we're anywhere near entrusting automation and machines to make those kinds of decisions. So, I guess what's at stake is we give up the human-ness, we give up the guilt and the ick factor and all the weird stuff that we do in terms of making decisions, and hand it over to these machines.
But these machines – to borrow from Ellen Broad – are making decisions made by humans. We've already made these decisions. We've just made them and then let them be. So it's like, instead of making these decisions every time, we've made them at one point over here, and now we've just let them go into the world, and now we'll just let them be whatever they will be. So, yeah, what's at stake is a horrible dystopian vision. I hope we don't get this, right. Over to you to add some hope.
Prof Anthony McCosker:
That's a tough one. Well, I don't know, I can't – I'm not a speculator, I'm not a futurist. I'm not going to even try. But someone did ask me – a peak body social services person asked me, you know, what are we looking at in the next ten years? They've got to be sort of prepared for how they use data in and across social services. What should we pick, what winners should we pick, what ones should we put aside, and what should we be advocating for? And it's such a really difficult question. And the only answer that I could really think of was: well, start at the ground up, and start with all of the processes you need to put into place to make sure that you're doing things better than you are now, and look for the outcomes that are improvements. Then I was thinking about this question of where are things headed, and the thing that really does seem to stand out across all the things that we've talked about at this symposium, and all of the projects that we're doing, is that there's something about expertise and learning that's changing and really shifting across all of the contexts where we see these kinds of technologies deployed or used. So, if you take for instance the big language models that are actually changing our media ecosystems – in terms of instantly produced news articles, for example, or deepfakes, synthetic media, producing new things out of nothing. I was talking to a young artist in the foyer out there about the new kind of synthetic media art play toys that are built on the back of DALL-E and so on. What's really interesting is that these things are changing our sense of expertise, in terms of who is the author, who is the writer. You know, what role do we play in terms of putting together that article that's auto-produced? And what role should we play? And is it something that we need to rethink, all of those roles that we've been so comfortable with in terms of our agency and our input into these processes? And the other side of that, or the flip side of that, is some of the success cases that are always referred to, things like diagnostics in medicine and, you know, image reading of melanoma. And the test is not a Turing test anymore, in the sense of, is the machine convincing enough to be a human; it's, is it better than the expert? Is it better than the expert reader of the medical image? I think that kind of goes to your point there as well, and I think it's interesting to think what happens when it is, and where do we take that and what do we do with that? And in what situations or kinds of contexts can we imagine that we could do things better than we are at the moment? I don't know, they're some of the things that I've been thinking about.
Penny Harrison:
I might add two things. One is – and perhaps to your human point there – I think we need to not lose the capacity to be shocked. Because sometimes we can become very complacent and we forget the unintended consequences and all the other kinds of perverse outcomes that can occur. And there's something about the humanness in us: we must remain extremely curious but also have this ability to be shocked. So, the role of autonomous weapons systems, for example, which has played out in the last couple of decades of conflict, has horrendous outcomes for civilians – protected under the Geneva Conventions, I might say. But again, it's back to what are the ethical considerations and questions we really need to ask about the limits of the humanness of decisions, and how far we are willing to go to enable these weapons systems, in this example that I'm using, to take on the responsibility that really probably should lie, ethically, with a human being.