PODCAST DETAILS

Bots as More Than Human
18 October 2022
Speakers:
Prof Daniel Angus, QUT
Dominique Carlon, QUT
Listen on Anchor
Duration: 21:20

TRANSCRIPT

Prof Daniel Angus:
Welcome to the Automated Decision-Making and Society podcast. I’m Professor Daniel Angus and in this episode I’ll be chatting to Dominique Carlon about bots. Dominique Carlon is a PhD candidate at the Centre. Dom has a background in criminology, history and law and is interested in the dynamics of online communities and emerging digital cultures. Dom’s current research focuses on the everyday role of bots in platform environments and explores how people create, engage and interact with bots in playful and creative ways. Thank you for joining me today, Dom.

Dominique Carlon:
Thanks, Dan. It’s great to be here.

Prof Daniel Angus:
So, Dom, could you tell us a little bit more about your current research? You wrote an essay that won our ADM+S Essay competition that talked about the more than human role of bots. Could you just unpack that a little bit?

Dominique Carlon:
Sure. So, I research the cultural role and the cultural positioning of bots in online communities, and I’m particularly interested in the type of bot or automated agent that might be considered novel. By this I mean bots that are making jokes and creating poetry, that are combining text and playing with memes and art and other forms of expression.

And what I find particularly intriguing about these types of bots is, first of all, they’re not pretending to be anything but bots. They’re proud bots, and they have a persona as being a bot. But it’s also that they aren’t necessarily important or functional, in the sense that they’re not out there performing laborious tasks like a commercial chat bot that talks with customers 24 hours a day, is called Melissa, and takes on this persona of a thirty-something customer support officer.

Prof Daniel Angus:
Yeah, and that’s one area I wanted to dig into a little bit today. I imagine a lot of people, when we talk about bots, have an imaginary of bots like that, or indeed have used them. These are bots that have taken on what was formerly a very human role, like customer service. We see this in the intelligent virtual agents that are in our homes, the Siris and others (we’ll be careful not to use those words too much, I guess, in case it triggers actual devices in people’s places as they’re listening to this). But would you agree that they often take on very human roles, and that they’re positioned as kind of human in the way they interact with us? Would I be right in saying that?

Dominique Carlon:
Yes, definitely. I think particularly in the commercial environment, where you’re seeing these chat bots that are designed to represent a human, or even to deceive customers into thinking they’re talking with an actual human, when in reality they’re talking to a bot that is pretending to be human.

Prof Daniel Angus:
I remember this when Google Duplex came out. Google Duplex was a platform that could make phone calls on behalf of a Google customer to place reservations for dinner, to book a haircut, these kinds of tasks. There was a lot of pushback because the people on the other end of the phone call, whom this Google bot had called to make the appointment, weren’t made aware that they were talking with a bot. So, this is a bot that calls up, makes an appointment, has a bit of chit chat, and then hangs up, and those people weren’t told and nothing was disclosed.

Do you think there’s a need for those kinds of bots, when they’re taking on a very human-like role in that way, to self-disclose and make people aware that they are in fact a bot?

Dominique Carlon:
Yes, and I don’t think it’s necessarily that the bot needs to say “you are speaking to a bot”. It’s more that I really question the need for most chat bots to take on a human name and persona. Often those roles are highly feminised, highly gendered, like the example of Melissa as a chat bot. It’s really a question of why this bot isn’t called, say, Hero chat bot. If it performs all of the tasks and meets all the expectations of a bot in that context, it doesn’t really need to be ascribed human characteristics.

Prof Daniel Angus:
So, you describe a lack of imagination in your essay, in that we kind of latch onto these existing tropes, right? Your case of the highly feminised chat bot is one good example, and members of our Centre have written extensively around that. There’s this idea that it’s almost a lazy way to quickly get a bot up and running: you borrow from those existing imaginaries of the hired help or the customer service agent, rather than building a bot from the ground up that is entirely new, interesting, different in some way. And this is where I think about your research with the dad bots and others that exist on the Reddit platform, and I wanted to ask you about that. So, can you talk us through a little bit of what you’re finding out about these other kinds of more, I guess, whimsical, artistic bots, where communities are building them from the ground up?

Dominique Carlon:
Yeah, sure. So, on the first point there, I think that bots could take any form of persona. They could be inspired by an animal or a hero or a fictional character, and in that way they can perform a role that is perhaps a little bit more fun and engaging, while being a bit more true to what they actually are. My current research is on these novelty bots. Even though they’re not performing really important tasks, and they’re also not seen as currently harmful, they’re not attracting much attention because of that. But despite this, they’re playing very active roles in these communities. So, I’m currently looking at the life cycles of a number of these types of bots on the Reddit platform. And one of the bots that I’m looking at is called Dad bot. Or Dad bots, plural, I should probably say, because there are almost 100 of them out there on Reddit.

Prof Daniel Angus:
I love this. I’m a dad and I love a good dad joke, right, so I know these dad bots. Can you just step us through the kind of style of interaction we’re talking about with the dad bot on Reddit?

Dominique Carlon:
So, Dad bot makes dad jokes. It’s really based on the persona and image of that daggy, middle-aged, sitcom-style dad who plays around with puns and makes everyone cringe, but also laugh.

Prof Daniel Angus:
So it’s the kind of thing where someone in, say, the comments says “I’m hungry” and it comes in and says “Hi Hungry, I’m Dad bot”, or something like that?

Dominique Carlon:
Yeah, that’s the very typical Dad bot response. So a user might say something like “hi, I’m not sure about that”, and then Dad bot comes in, detects the “I’m” part, and says “Hi Not sure about that, I’m Dad”, and inserts a little smiley comment.
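The reply mechanic described here can be sketched in a few lines of Python. This is only a minimal illustration, not the actual Dad bot code: the function name, the regex, and the exact reply format are all assumptions.

```python
import re

def dad_reply(comment):
    """Dad-bot-style reply: detect an "I'm <something>" and greet it as a name."""
    match = re.search(r"\bI'?m\s+(.+)", comment, flags=re.IGNORECASE)
    if match is None:
        return None  # no "I'm" phrase to joke about, so stay silent
    name = match.group(1).rstrip(".!?")
    return f"Hi {name}, I'm Dad! :)"
```

For example, `dad_reply("hi, I'm not sure about that")` yields `"Hi not sure about that, I'm Dad! :)"`, while a comment with no “I’m” in it gets no reply at all.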

Prof Daniel Angus:
I love it.

Dominique Carlon:
Sometimes it also makes funnier jokes, like: what should a lawyer always wear to court? A good lawsuit.

Prof Daniel Angus:
Oh my God, that’s perfect. So in that sense, I guess that’s a very benign form of humor, right? It’s a very safe form of humor in many ways. Do you find that there are those who try to push these bots in particular directions, where that humor might start to cross boundaries? Is it always welcome? Where does it go wrong?

Dominique Carlon:
Yeah, so it’s not always welcome. I found around 400,000 comments by Dad bots, and Dad bots don’t have a sense of context. They aren’t just responding to comments in specific subreddits that might be more lighthearted. They sometimes go into serious conversations, and sometimes, I would say, they are welcomed in those conversations. For instance, if there’s a real political debate that might be starting to get quite nasty or quite toxic, sometimes Dad bot just comes in and says “Hi Not stupid, I’m Dad”, and it just really breaks it up.

Prof Daniel Angus:
Yeah, yeah. So it can be a circuit breaker, in a way, in an otherwise toxic conversation.

And that’s interesting. But I guess it could derail things at the same time, right? Where people are starting to make headway, or are thrashing something out, it could be unwelcome because it’s perhaps making light of a situation that people need to sit with or treat a bit more seriously.

Dominique Carlon:
Yeah, yes, very much so. And that comes down to Dad bot not understanding context. So sometimes it comments in conversations where Reddit users are talking about the death of their dad, or about an assault, or family incest, for instance, where it’s just not the place for a dad joke to be popping up. And then you get a commentary of “Dad bot, this is not the time, this is not the place”, or that maybe the bot should be removed or banned from those specific subreddits.

Prof Daniel Angus:
Yeah, yeah. Do you think that’s reasonable? I mean, given we’re thinking about bots as more than human and the role they play within human contexts: we get stuff similarly wrong, right, when we’re having conversations, people putting things outside of context, or making antagonistic comments where they’re not welcome. Do you feel that sometimes we set the bar higher for a bot, that it needs to meet and go beyond a certain threshold in terms of its performance expectations within these conversations?

Dominique Carlon:
Yes, a little bit. And I also think it’s very interesting to look at how Reddit users go about navigating and negotiating what is expected of a bot. There’s this idea that bots can just be funny, and they can be simple, and they can be silly. At the same time, though, there’s a sense that bots should be better behaved. So there’s always this negotiation, sometimes in very interesting and unexpected ways, that happens on Reddit.

Prof Daniel Angus:
And talking about chat bots, of course, my mind goes back to some luminaries in the field, people like Alan Turing. Everyone has probably heard of the Turing test, the idea of whether we can build machines that can fool humans into believing they’re conversing with other humans rather than with a bot. But then, of course, there are people like Joseph Weizenbaum, who created one of the very first bots, ELIZA. And in thinking about those kinds of bots, I’m often a bit concerned that our thinking around bots has always been limited, that we’ve always been limited to thinking of bots as an extension of the human, that they have to fit this role and be the ideal human. I’m interested to hear your thoughts, Dom, around that, and where you see some of your research, and particularly your essay, taking us. Where do we go beyond this idea of the human?

Dominique Carlon:
So, I would suggest that there has perhaps been a real focus on developing bots that are indistinguishable from humans, and this has perhaps been a little bit limiting in terms of what bots can do. You know, the Turing test, and Bot or Not, and trying to create a bot where you really can’t tell whether it’s a bot or a human speaking, in terms of the type of language it uses and the type of responses it makes. I think that has lent itself to this idea of bots becoming hyper-intelligent AI, and to this fixation on the value and importance of developing that, and I think that has led to this idea that bots embody a lot more than what they are. They’re still a bot, and they have been created by humans, and humans have decided that that’s the direction they want bots to take.

They want bots to be as lifelike, as human-like, as possible. Whereas, having had a look at some of these bots on Reddit, there’s a real extent and variety of bots that are playing in the area of entertainment and humor and poetry: bots that speak in Shakespearean language, bots that talk like Yoda, various other Star Wars types of bots. Whilst these might seem like really insignificant and silly creations, people actually have a lot of fun in creating these bots, and the bots aren’t trying to be anything else. They’re not trying to be particularly complex or smart. But what they represent is that people are using them to practice coding, to learn how to make a bot. They’re playing with it, they’re having fun with it, and they’re looking at creating a bot that is artistic and funny and cool and humorous, one that people actually enjoy engaging with. And I think that side of creating bots has often been dismissed as insignificant compared to the hyper-intelligent, human-like chat bot.

Prof Daniel Angus:
I was going to ask you a question about potential benefits to society, but you’ve kind of already cashed that out in your examples showing that creative capacity, right? Why can’t the benefit just be that we have these tools and we play around with them, that we become creative and build these spaces and be playful? That is a good in and of itself, when a lot of the oxygen in the room, I feel, is sometimes sucked up by the bad bots. When we talk about bots in popular culture, people often think about disruptive bots online, things that spread, say, rumors or falsehoods, and so a lot of the discussion is often very negative. It’s often quite protective, this idea of banning bots from certain platforms and such. “Bot” is even used now as a slur against others if people feel they’re toeing a particular political line. So do you find that we perhaps need to hit the pause button for a second and come back and think, well, how can we be creative again? Can we use this technology in a way that is generative, playful, whimsical, and that there is inherent value in that? Is there more to it than that? Do you think there are other benefits for bots of this sort?

Dominique Carlon:
Yes, so I don’t think it’s necessarily one or the other. I think there are always going to be badly behaved bots and bots designed for malicious purposes, bots showing a false sense of support for an ideology or political figure, or spreading disinformation. I think they will always be around, and there always needs to be research on detecting those bots and understanding how they’re operating and what information they’re sharing. But at the same time, I think it’s important not to stretch that across to generalising bots as being one thing, either this very useful customer service thing or this very damaging and dangerous thing, when the reality is that the types of bots I’m talking about on Reddit are very everyday bots. These are everyday people who are creating these bots. They’re not inherently harmful, and there’s value in the play of these bots, in learning the skills to make them and in playing around with them. And you spoke a little bit about the disruptive behavior of bots, but I would also suggest that bots can be disruptive in a positive way, in the sense of breaking up harmful conversations. Some bots are deliberately designed to do this. For instance, on Twitter there’s the Doom Scrolling Bot, and Reddit has the Cool Down Bot, which I might talk about later, and a few others. But I would suggest that the encounter with these bots is perhaps a little bit different from a spontaneous, unexpected encounter with a bot. A bot that tells you to cool down, or relax, or go take a drink of water, get some fresh air, stop scrolling, might sometimes be received in a less positive light. It might be seen as perhaps a bit patronizing, a “I don’t want to see you right now” kind of thing.
Whereas if someone just comes across a bot that’s making a dad joke or a poem, sometimes it’s a nice breath of fresh air amongst a lot of heated discussion or a lot of negativity online. So I think there are real positive roles for bots, not just through play, but also in terms of disrupting negative behavior.

Prof Daniel Angus:
Yeah, the image that came to my mind was Microsoft Clippy, which was the kind of largely doomed “nag bot”, as they’re called, these bots that appear and try to monitor your behavior, see if you’ve entered a particular state of work or, you know, doom scrolling as you say, and then come in and interrupt that process, that flow. I imagine you would consider some of these examples of nag bots too. Can you talk a little bit more about that Cool Down Bot? What is it and how does it work?

Dominique Carlon:
Sure, so Cool Down Bot is a Reddit user-generated bot and it largely detects swear words. It will count the number of swear words in a Reddit user’s comment and say, you know, you might want to just take a break, have a glass of water, cool down, don’t take things so seriously. But the bot hasn’t been received very well on Reddit.
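The counting behaviour described here could be sketched as follows. This is a hedged illustration, not the real Cool Down Bot: the word list, function name, threshold, and message wording are all placeholders.

```python
# Placeholder swear list: the real Cool Down Bot's word list isn't given here.
SWEAR_WORDS = {"damn", "hell", "crap"}

def cool_down_reply(comment, threshold=3):
    """Count swear words in a comment; at or above the threshold, suggest a break."""
    words = (w.strip(".,!?") for w in comment.lower().split())
    count = sum(1 for w in words if w in SWEAR_WORDS)
    if count >= threshold:
        return (f"You used {count} swear word(s). You might want to take a break, "
                "have a glass of water, and cool down.")
    return None  # below the threshold, the bot stays quiet
```

A design like this, keyed purely off a word count with no sense of context, helps explain the reaction Daniel guesses at next: the trigger is trivially easy to game.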

Prof Daniel Angus:
If I can take a guess, and am I guessing this right: with the bot being triggered off the number of swear words, was the instant reaction of Reddit users to see how many swear words they could put in a comment, and how quickly they could get Cool Down Bot to come in and try to tone police them? Is that correct?

Dominique Carlon:
Exactly, but not just that, people then created bots that started swearing.

Prof Daniel Angus:
It’s very Internet behavior, isn’t it? It’s almost guaranteed that when you tell someone not to do something, they’re just going to do it, or find ways to do it and amplify it. That’s amazing.

We have to wrap it up there, I’m sorry, Dom, but is there anything else around bots? Just as we close out on what’s been an absolutely fascinating conversation, where do you see this research heading, and where do you see the role for bots in the future? We’ve got these amazing new technologies, like language generation technologies, GPT-3 and others, and platforms where users are playing around more and more with this. Do you see this as a burgeoning space, where we’re going to find an almost Cambrian explosion of bots, making the near term quite interesting?

Dominique Carlon:
Yes, though I think there is a potential danger of some of these bots disappearing. What I think is important is that these bots on Reddit that I’m researching are user-generated bots, so they’re created by Reddit users, not by Reddit itself. I think there’s a real role for that user-generated playing with things like bots and those types of technologies, and I think it’s important in the long run that chat bot development doesn’t become limited and restricted to bots created by the large tech and commercial companies, and that the everyday user of the Internet can play around with these types of tools.

Prof Daniel Angus:
That’s fantastic and what a great way to wrap. Thank you so much, Dom, for coming in.

Dominique Carlon:
Thank you, Dan. It’s been fabulous.

Prof Daniel Angus:
You’ve been listening to a podcast from the ARC Centre of Excellence for Automated Decision-Making and Society. For more information, go to the Centre’s website at admscentre.org.au

END OF RECORDING
