EVENT DETAILS

News and Media Symposium – Search Engines and Recommendation Systems
6 October 2021

Speakers:
Dr Kylie Pappalardo, QUT node, ADM+S (chair)
Louisa Bartolo, QUT node, ADM+S
Dr Jeffrey Chan, RMIT node, ADM+S
Prof Mark Sanderson, RMIT node, ADM+S
Duration: 0:59:36

TRANSCRIPT

Dr Kylie Pappalardo:

Okay. Hi everybody. For those who don't know me, my name is Kylie Pappalardo. I am a senior lecturer in the law school here at QUT, a chief investigator in the Digital Media Research Centre, and an ARC early career research fellow. I have the pleasure of moderating today's session on search engines and recommender systems. So, our purpose today is to consider the social and ethical issues that arise with the use of search engines and recommender systems, and in particular, to look at how platforms can and should take responsibility for how their recommender and ranking systems display, privilege and amplify certain messages.

So, we're going to run a similar format to the previous session. We have four panellists who will each present for five minutes, and then we will have a Q and A session. So, what I will do is introduce our two panellists from RMIT first, who are joining us on Zoom. You can read all of our panellists' full bios at the back of the program, and if you've lost the program or the code for Slido, it's on the back of your little thing here. You can see everything they've done at the back there, but I'll just give you a very brief bio for each speaker, and then we can get straight into it. So, I'll introduce, as I said, Jeffrey and Mark first, who are joining us online. Jeffrey Chan is a senior lecturer at RMIT University and an associate investigator with the ADM+S. His research interests lie in machine learning; recommendation fairness, accountability, and transparency (also known as FAccT for short); social network analysis; data-driven optimisation and decision-making; and the interdisciplinary research that combines these fields to solve novel applications.

Also joining us online is Mark Sanderson who is a professor of information retrieval at RMIT and the head of the RMIT information retrieval group. He is also a chief investigator with the ADM+S and his research is in the areas of search engines, recommender systems, user data and text analytics. So, Mark and Jeff are our computer science experts on the panel today. I’m going to hand over to them first to do their talks, to really just give us an introduction into recommender systems and search engines, and some of the key issues and challenges that are facing us today. So, I’m going to hand over to Jeff first.

Dr Jeffrey Chan:

Okay, thanks a lot Kylie. Can everyone hear me? Yeah, great, thank you. So, thank you for the opportunity to take a few minutes of your time to give a high-level introduction to how recommender systems work, and then briefly introduce some of the work that I've been doing within the centre. So, my interest is in recommender systems, and what a recommender system typically does is take users' interests and intents and then try to suggest interesting and relevant content to them, as I think a lot of you know. And it does that in two ways, I guess. One way is that it tries to personalise the suggestions based on the user's interactions with the platform. So, it could be YouTube, it could be Amazon, Netflix, whatever your favourite media platform is that uses recommender systems, right. And it does this in a number of ways. There are direct signals from ratings – for example, if you rated a video that you're watching on YouTube, that's a direct signal about whether you like it or not. And there are also indirect signals, like for example how long you've been watching a YouTube video, how long you've been dwelling on, say, a web page on Amazon, what you might have searched for or clicked on, say, on Amazon, et cetera. These are indirect signals about your interests, and this information about you as a user is used as a way to try to understand what your interests are, right. These are then combined with the personalised information and profiles of other users in the system, and the way a typical recommender system would use this is, for example: if a typical user watches action movies and romance movies – although that's probably not the best example – and you yourself watch, say, action movies, then the system says, look, a lot of the other users with similar profiles to you seem to also like romance movies, so I'm going to suggest a romance movie to you, right. And of course this – as you know as part of the centre – does lead to a lot of issues. The majority might, for example, watch this romance movie, but you yourself don't actually like that, right. So that leads to issues about the diversity of content recommended, and also things like, for example, the less popular video creators on YouTube or the music producers on Spotify – these kinds of content producers won't necessarily have as much exposure as they should, right.
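What Jeff describes – suggesting romance movies because users with profiles similar to yours liked them – is essentially user-based collaborative filtering. A minimal sketch with a toy ratings matrix (all data here is invented for illustration, not from any real platform):

```python
import numpy as np

# Toy user-item ratings matrix (rows: users, cols: items; 0 = not rated).
ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 0, 0],   # user 1 (similar tastes to user 0)
    [1, 0, 5, 4],   # user 2
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def recommend(user, ratings, top_k=1):
    """Score unrated items by similarity-weighted ratings of other users."""
    sims = np.array([cosine_sim(ratings[user], ratings[u])
                     for u in range(len(ratings))])
    sims[user] = 0.0                      # exclude the user themselves
    scores = sims @ ratings               # weighted sum of others' ratings
    scores[ratings[user] > 0] = -np.inf   # don't re-recommend rated items
    return np.argsort(scores)[::-1][:top_k]

print(recommend(0, ratings))  # items user 0 hasn't rated, best first
```

Production systems replace the explicit similarity computation with learned latent factors, but the principle – inferring your tastes from users who behave like you – is the same.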

So, I guess that leads to my interest in this centre, which is these kinds of multi-stakeholder recommender systems, where we have users, we have the platforms themselves, we have content producers, and each of them has a different utility they want to maximise, right. And the trade-offs between these utilities typically mean that it would be impossible to optimise all of them simultaneously, right. So, we have mathematical and decision-making tools – decision science tools – to specify what might be the best way to go about this. But the actual trade-off, right, that's more to do with societal impacts, more to do with community information, right. And so that part of the work points to the fact that it's not just a technical or algorithmic solution – we actually need collaborations with social scientists, legal experts, etc, to encode that information into the recommender systems. The other thing I want to quickly mention is a project that Mark and I are part of, where we are also interested in trying to represent the third parties that aren't typically represented in a system. To give an example of a GPS system: you use it, you get routed down different paths – say by Google Maps, which does that – but you also have third parties, like the local residents, who are not typically captured in the system. And we're very interested in trying to capture that information and incorporate it into the recommender systems of the future. So, thank you.
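One common way to formalise the multi-stakeholder trade-off Jeff mentions is to scalarise the competing utilities into a single weighted objective; choosing the weights is exactly the societal judgement he says can't be settled purely algorithmically. A minimal sketch with invented utilities and weights:

```python
# Hypothetical per-item utilities for three stakeholders.
# The weights encode a societal judgement, not a technical truth.
candidates = [
    {"item": "blockbuster", "user": 0.9, "producer_exposure": 0.1, "platform": 0.8},
    {"item": "niche_video", "user": 0.6, "producer_exposure": 0.9, "platform": 0.5},
]

def combined(item, w_user=0.5, w_prod=0.3, w_plat=0.2):
    """Weighted scalarisation of the competing stakeholder utilities."""
    return (w_user * item["user"]
            + w_prod * item["producer_exposure"]
            + w_plat * item["platform"])

best = max(candidates, key=combined)
print(best["item"])  # which item wins depends entirely on the weights
```

With these particular weights the niche video narrowly wins; shift a little weight back towards user utility and the blockbuster wins instead, which is why the weight-setting is a governance question rather than an engineering one.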

Dr Kylie Pappalardo:

Thanks Jeff, I'm looking forward to unpacking that a bit more in the Q and A. But we might hand over to Mark now to talk a little bit about how search engines work.

Prof Mark Sanderson:

Thanks Jeffrey, for that fantastic summary. The curious thing about search and recommendation is that a lot of the technology behind them is actually very similar, but search engines have a big advantage, which is that you come to them with a query – you come to them knowing you want something, and you can express that as a query. You wouldn't come to a search engine unless you knew how to express the query in the first place. So, search engines get a big advantage in that they get a very clear expression of what somebody wants to try and find out.

What do they do? You know, basically they find documents that have those words in them, but that's not good enough. That's what library systems from 30 years ago used to do, and if you tried that on Google you would just be overwhelmed with millions of documents that match your query. So, what search engines like Google and Bing have to do is sort the results, and they have to sort them by priority. Some of the tricks that they use: they look at whether the words that you've typed in appear close to each other. Are they repeated multiple times inside the document? Do they appear in particular parts of the document, like the title? A very good trick that Google actually used: if you've got a page which you're searching on, but you have a page over here that links to this page, they look at the anchor text – that blue text that links to this page – and they use what others say about your page as an important clue. So, do the keywords that you've typed in match that blue anchor text, you know? They also look at things like the links: is this from an authoritative source? And, I don't know, Google will boast they have maybe 300 features that they put into this sorting algorithm, and they sort the documents to try to get the best ones for you. How do they know that's correct? They get raters – humans – to assess thousands of queries on a pretty regular basis, to look at the documents that come back. They have a very large manual explaining to those raters what sorts of things they want them to do when they're rating those documents, and then they try to find an algorithm that gets as many of the documents that the raters liked near the top. They do also look at click signals – they look at the way that users interact with the search systems – but they seem to use a mixture of strategies there. For things like medical queries, perhaps, or financial queries, they tend to focus more on using the relevance assessments from the raters that they've hired. If you're searching for – I don't know, a game, mah-jong – then they might just take you to the website that people click on more often. So, they use a variety of signals depending on the kinds of searches that people are issuing. They're pretty good at it.
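To make that concrete: each signal Mark lists (term frequency, proximity, title matches, anchor text, authority) can be treated as a numeric feature, and documents are sorted by a combined score. Real engines learn the combination from rater judgements over hundreds of features; this sketch hand-sets a linear mix over invented features purely for illustration:

```python
# Hypothetical ranking features for candidate documents matching a query.
# Feature names, values and weights are all invented for illustration.
docs = [
    {"id": "a", "term_freq": 0.8, "proximity": 0.9, "in_title": 1.0,
     "anchor_match": 0.7, "authority": 0.6},
    {"id": "b", "term_freq": 0.9, "proximity": 0.2, "in_title": 0.0,
     "anchor_match": 0.1, "authority": 0.9},
]

WEIGHTS = {"term_freq": 1.0, "proximity": 2.0, "in_title": 1.5,
           "anchor_match": 2.5, "authority": 1.2}

def score(doc):
    """Linear combination of the ranking features."""
    return sum(WEIGHTS[f] * doc[f] for f in WEIGHTS)

ranked = sorted(docs, key=score, reverse=True)
print([d["id"] for d in ranked])  # best-scoring document first
```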

The other thing that they're very good at – and by the way, you might be asking yourself, if Google is so good, how come my university's website has such a crappy search engine? Or how come email search is so terrible on whichever system I'm using? It's because the big search engines also exploit redundancy. There's vast redundancy on the web. So, when you type in your query on your email collection and you think you're getting all the relevant results, you're not – you're just getting the results that match your words. There's perhaps only one email that answers your question, and so finding the words that exactly match that email is extremely hard. On the web you get to exploit all of this redundancy. If you're searching for, I don't know, a company that does taxis to Melbourne airport, if you miss the websites that talk about a cab to Melbourne airport, it doesn't matter, right? Because there are enough taxi websites anyway. But in things like enterprises and in things like email search, you don't have that redundancy to exploit – those are things that could be made better within search engines.

The thing that interests me a lot – I'm kind of an evaluation nerd and I find evaluation very interesting – is that evaluation at present, particularly in the academic world, is quite reductive. If you do a Google search for the four letters N D C G – normalised discounted cumulative gain – you'll see the formula that most academics use to decide whether a search engine is any good. Basically it looks at the way that the list has been sorted, compares it to the list that a bunch of experts would ideally produce, and makes a comparison between those sorted lists. It's basically a way of measuring your happiness, your satisfaction, with the search engine. But it's a very simple formula. There's a couple of divisions in it, there's a summation, there's a log just to make you a bit nervous, but it's actually very simple, and it's an interesting question as to whether those sorts of mathematical formulas really capture the way that people feel. And of course, for us the question is how do you do something better? Which is an interesting challenge, because you've got to deal with scale, right. I mean, people like Google – I told you a minute ago they test thousands of queries – so you want to be able to assess very large amounts of content against different iterations of a machine-learned algorithm. But you also want to try and capture something more than these very simplified formulas. Look, and that's something that we spend time thinking about at RMIT. So, hopefully that gives you a quick summary of what search engines are like.
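For reference, the NDCG measure Mark describes sums a relevance gain at each rank, discounts it by a log of the position, and normalises by the ideal ordering. A minimal sketch of the standard textbook form (real evaluations vary, for example in the gain function used):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: gain at rank i discounted by log2(i + 2)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """Normalise by the DCG of the ideal (descending) ordering."""
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal_dcg if ideal_dcg else 0.0

# Rater-assigned relevance of the top 5 results, in the order returned.
print(round(ndcg([3, 2, 3, 0, 1]), 3))  # 1.0 would be a perfect ordering
```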

Dr Kylie Pappalardo:

So, our next two speakers, here on the podium with me, are from the humanities, and each will be telling us about a project they are involved in that looks at recommender systems or search engines within a particular area of the humanities. So, Louisa Bartolo, sitting next to me, is a PhD candidate within the Digital Media Research Centre. Her research centres on questions around platform governance, with a particular focus on digital platforms' recommender systems, and Louisa will tell us a little bit more about that in a second.

And our last speaker, Professor Patrik Wikstrom, you've met already. He is the director of the Digital Media Research Centre, a professor of media and communication, and a computational social scientist. His research focuses on developing data science tools and methods to analyse the production and consumption of digital cultural products, particularly music. So, I'll hand over to Louisa to start.

Louisa Bartolo:

Thank you. So, yes, as Kylie said, I'll talk a bit about the projects I'm working on in my PhD, and they link really nicely to some of the things Jeff and Mark said. So, I'm working on algorithmic recommender systems as part of platform governance. Historically there's been a tendency to think about content moderation in terms of content removal, and there are plenty of very important ongoing debates about what is legitimate content removal and what isn't. But there's been a more recent shift, which my research is part of, to think about the way that content gets curated beyond purely when it gets removed: the way it gets recommended to us, the way it gets amplified, the way our attention is shifted through what have been referred to as digital nudges. So, within that space, I'm looking at two specific platforms.

I'm looking at book recommendation on Amazon, and streamer recommendation on Twitch. I'll explain a little bit about why I think those two are interesting and how they link into some of the points made previously. So, on Amazon I'm very interested in the way books around contentious historical issues are recommended to us. It might surprise you to know, for instance, that when you go to the women's studies history category, the number one bestseller – for a long time now, in the months I've been doing my work – is by Ben Shapiro, the notorious right-wing anti-feminist commentator, and the second bestseller is his sequel to the same book. The titles, by the way: facts don't care about your feelings, and facts still don't care about your feelings. When you click on those books you are suggested further books by Ben Shapiro and similarly problematic anti-feminist writers. This matters because it's not just a problem that's limited to Amazon. I was fortunate enough to be part of a collaborative project earlier in the year where we looked at YouTube recommendations, and my colleagues and I – two of them are here, so I'll give them a shout out, Ariadna and Jean – were looking at what sorts of recommendations YouTube serves up in its search feature when we searched for feminism. And it was interesting. It was a mixed bag, but a number of anti-feminist commentators did very well. This was interesting because while we were doing it, we were also looking at what YouTube was serving up when you searched for coronavirus, and there you repeatedly got authoritative media sources. We know that at this time YouTube had very publicly made a decision to promote what it calls authoritative sources in its recommender systems. This was part of its 'four Rs of responsibility' policy, and one of them was to boost authoritative content. But platforms like YouTube, like Amazon, have historically been quite reluctant to enter into these fraught sociocultural spaces like feminism. And so, what I'm trying to look at in my project is what responsible recommendation would look like around those sorts of fraught questions.

My second case study is looking at streamers on Twitch. You might think that's an odd choice, but Twitch is very interesting because it forms part of – again – what we know has been quite a toxic, sort of misogynistic gaming culture, and Twitch has recently made several improvements to its recommender systems, as it defines them, to make them fairer. The way Twitch seems to be defining fairer is to allow smaller streamers more visibility in its home page recommendations. But simultaneously it's also introduced a new series of identity tags that streamers can attach to their content. So I could now, for instance, if I were a gamer, go on Twitch and tag my content as female, and that was done to make people more discoverable by their identity group – and it was done at the request of streamers themselves. These are totally voluntary tags and you don't need to use them, but what this is going to allow me to do, with my great supervisors, is to look at how fair and how diverse Twitch's homepage recommendations are, with diversity being understood as an identity question, a sort of social diversity. And if I have one second left, I really want to link this to some of the really important points Jeff and Mark made. So, Jeff's point that these are multi-stakeholder systems, and that there are also third parties that are impacted – that's a really important point. It's this argument that Silvia Milano has called the social externalities of recommender systems: that I as an end user – I know we don't like using that term, but I'm using it – might be satisfied by the search results or recommendations I receive, but there is also a broader social cost that we carry when our systems promote content that reduces the value in which we hold women, or makes our gaming spaces less diverse and keeps them very homogeneous. And the other point I found interesting was around the evaluation question that Mark ended on: how do we evaluate these systems when the tools we have – the ways we've traditionally thought about these things, like diversity – have sometimes been quite narrowly, technically conceived? Because we know there are broader social questions at stake.

Prof Patrik Wikstrom:

So, the project that I will be talking about is called creativity, diversity and equity, and it's about music streaming platforms. Do you know that every day, sixty thousand new songs are uploaded to the Spotify platform? There are at the moment about 70 million songs available for Spotify users to just start listening to straight away, and it's quite easy to show that such access to a catalogue easily creates a situation where a small number of well-established artists maintain global success at the expense of less fortunate, less well-established artists who are unable to cut through the noise. Regardless of whether that is true or not, it has led to intense criticism from various parts of the music industry establishment, from creators, and what have you. And if you look at the numbers, there are about 8 million artists on Spotify. Spotify themselves report that less than one percent of these eight million artists generate 90 percent of the revenues. So, it's very top-heavy, and this may sound like a problematic number, but Spotify is very proud of the fact that it is 57,000 artists that are generating that 90 percent. And the reason why they're proud is that six years ago the number was just 25 percent of that – it's four times higher now. So, they look at this as a huge success and they use these kinds of trends to counter the criticism from music creators, and they claim the reason for the trend is the successful design and implementation of their music recommender systems.

So, how are these music recommender systems implemented on the platform? Well, they are playlists. They are lists of songs, and some of the playlists – as I'm sure most of you are aware – are curated by humans. They are focused on a particular theme: a genre perhaps, a place, a mood, something like that. And some of them are automatically curated based on the user's listening history and the kinds of principles that Jeffrey and Mark were talking about: the user's listening history, their demographics and location, their social network, and a range of metadata associated with the songs. Playlists have become hugely popular – more than 50 percent of all streams on platforms, if we generalise like that, are started from playlists these days. So, they are very influential on the music that is being played on these platforms. And when the platform operators claim that their recommender systems make the music experience more diverse, what do they actually mean? They generally mean that music listeners are enjoying music from a greater number of artists than they did in the pre-streaming era. That's what they claim. Put another way, taking this into the music sociology space, you could say that users' musical tastes have become increasingly omnivorous. It's this tension between the platforms' claims and the creators' criticism that is the starting point for the project we're working on. And the core question is quite straightforward: how are music streaming platforms shaping music diversity?

And the issue of diversity in popular music has a long history in academia, right. Since the 60s there have been studies focused on diversity as the number of artists that became popular, or the number of songs that became popular. Over the years we've had studies focused on geographic aspects of diversity, or genre-based aspects of diversity, and some work focused on what music actually sounds like, to understand if the acoustic properties of music are changing over time. And the conclusion of all these studies has been that over the past decades music listeners listen to songs from a greater number of artists, from a greater number of countries, but the songs sound increasingly similar, which is interesting and important for sure. But there are some significant limitations with these studies. One is that most of them are at an aggregate level – at industry level. That means, for instance, that they cannot say anything about different types of music users. They cannot say anything about the avid music listener versus the casual music listener, who are probably quite different in terms of how they experience the platform. And the second thing is that they only use data from popular artists – the one percent that we talked about, if they're lucky; probably less than that, even. The reason for these limitations is primarily restricted access to data. Before the streaming era, the data simply wasn't available – it was impossible to collect, for very obvious reasons. These days the data is indeed available, but it is locked behind the closed doors of the commercial streaming platforms. So, this is one of the aspects that I find really exciting about the project that I'm working on, because we've been working very hard – it actually took four years – to establish a relationship with one of the streaming platforms, which means that we now do have access to their crown jewels. We have music streaming data covering three years, about 150,000 users, and something like 3.5 billion streams to analyse these kinds of things with. It's a massive data set, which means that we can go where few researchers have gone before. One of the things that I am particularly excited about right now is questions such as: do users who rely heavily on algorithmically curated playlists for their music listening experience develop more or less omnivorous musical tastes than other users over time? This is a question that very few studies have been able to address previously, so I'm really looking forward to working on it. We are at the early stages of the project, and I'm really keen to collaborate with others as much as possible on this, so if you're interested or curious, just let me know and we can get together on this.

Dr Kylie Pappalardo:

Thank you to all of our panellists. So, we now have about half an hour for a Q and A. As before, if you have any questions, whether you're in the room or online, please ask them through Slido – the event number is 101 if you're not in there already – and I will ask them to the panel. I can see them here on the screen, so while I give you a few minutes to do that – and I've seen a couple come through already, which is wonderful, thank you – I have a couple of questions that I've prepared for the panel that I might just kick off with.

So, the first one I think relates quite a bit to what you were saying, Louisa, which is around a concern we sometimes see that recommendation systems can radicalise people by taking them down rabbit holes of increasingly extremist content in response to their interests. And if I were to make that relevant to music, Patrik, in a way I think I'm starting to lead towards that last question you were asking, right. Which is, if people are relying on the algorithm, are they increasingly seeing more and more of the same thing? And if that is the case, how should we think about the responsibility of recommender systems to promote a diversity of viewpoints?

I think most of the panellists today mentioned diversity at some point or another, but I have a feeling it means different things to different people. So, I’m also interested in what diversity means to you, and how you think recommender systems should be incorporating diversity. So, that is open to any of the panellists. Louisa, do you want to kick off?

Louisa Bartolo:

Yeah, sure. So, I think what's interesting in the way you link this question – which is an understandable link – about rabbit holes and diversity: there's a very interesting paper by Natali Helberger, who does a lot of work on recommender systems in the context of news recommendation, and one of the things she's argued is that we need to talk about diversity with a mission. This idea that diversity in and of itself is a sort of intrinsic good that we're all working towards is slightly odd. You could imagine a situation where there is a mix of pro-vax and anti-vax content; you are probably not going to be striving for a 50/50 viewpoint presentation there. And really, when we talk about rabbit holes, on the whole our concern is around things like extremism and being pushed into problematic filter bubbles and polarisation. So, it's often additional value judgments and concerns we have. And so, my point would be, in terms of how I think about diversity, that there has to be another set of questions that comes along with diversity: what is your ultimate goal? In the context, say, of news recommender systems, Natali Helberger's work is looking at what sort of democracy we want to live in. In the non-news space, which is where I'm working, we have questions around social justice, inclusion and representation. So, that's how I would think about it.

Dr Kylie Pappalardo:

Great, thank you. Mark or Jeff, did you have anything you wanted to add to that?

Dr Jeffrey Chan:

Yeah. So, I guess in terms of diversity from the perspective of the algorithmic side, usually that would mean serving up content that's different from what the user might typically see. And I think Louisa's point there is very pertinent: diversity for the sake of diversity can be meaningless, because particularly in the context of, say, terrorism, or things that society may not necessarily approve of, having diversity there wouldn't make a lot of sense, right. On the algorithmic side, we can look at the previous history of people, and also – what Patrik was talking about in terms of the less popular producers, or niche producers – that could also be a case where we can think about serving up diversity, right. So, diversity as in we want to serve up producers that typically are not recommended. But there's a flip side to this, in that, particularly for commercial platforms, they're there to make money, right. They want to get us to look at the videos so they can serve us more ads, et cetera. And if they serve up a lot of content that the users might not find totally relevant, then the users might drop off. So, they will also try to optimise for that, and that sometimes will be in conflict with what might be social or societal norms. And so I guess, in order to correct this, either the business model has to be looked at, or regulation, right. Where we say, look, like in traditional media, you can't just show anything you like – there are regulations and social norms that you should respect.
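One simple way platforms can trade relevance against the kind of diversity Jeff describes is a greedy re-ranker in the style of maximal marginal relevance: each pick balances an item's relevance against its similarity to items already picked. A minimal sketch with invented scores and a toy similarity function (nothing here reflects any real platform's algorithm):

```python
# Greedy diversity-aware re-ranking (MMR-style): balance relevance against
# similarity to what has already been picked. All scores are invented.
def rerank(items, relevance, similarity, lam=0.7, k=3):
    """Pick k items maximising lam*relevance - (1-lam)*max similarity to picks."""
    picked = []
    pool = list(items)
    while pool and len(picked) < k:
        def mmr(x):
            redundancy = max((similarity(x, p) for p in picked), default=0.0)
            return lam * relevance[x] - (1 - lam) * redundancy
        best = max(pool, key=mmr)
        picked.append(best)
        pool.remove(best)
    return picked

relevance = {"pop_hit_1": 0.95, "pop_hit_2": 0.93, "folk_track": 0.70}

def similarity(a, b):
    # Crude genre similarity: same name prefix => similar (illustrative only).
    return 1.0 if a.split("_")[0] == b.split("_")[0] else 0.1

print(rerank(relevance.keys(), relevance, similarity))
```

With these numbers the folk track jumps ahead of the second pop hit, even though it is less relevant on its own – exactly the kind of exposure boost for less popular producers that Jeff mentions, controlled by the lam parameter.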

Prof Mark Sanderson:

I think in terms of search, what you're seeing is the search engines changing over time. I remember many years ago, if you typed in the phrase miserable failure, the White House website became number one. And actually, I got a brief chance to talk to the guy in charge of the algorithm at the time and said, is that really the right thing to be doing? You know, I know it's George W Bush in charge, but really? And he said, well, it's what the internet wants, and he really thought that that was the right way to go. You wouldn't hear that from a search engine today – you would hear a very different view. Now, are they in the right place? Are they in a place where we would all ideally like them? Not necessarily, but there's definitely been a journey over time, and I don't think the search engines are finished in terms of diversity. There are lots of topics there, because there's also diversity of novelty. Do you bring in an established page that's getting lots of clicks? Do you bring in something new, which is a bit like the music example we were seeing earlier? Or, you know, my favourite acronym, ACL: anterior cruciate ligament, or American Christian Librarians – and there's an ACL place down here in Melbourne that's actually a car repair place. So, you've got to diversify on those different kinds of things, as well as the things that you're talking about, like political diversity.

Dr Kylie Pappalardo:

So, if I can just jump on that point quickly, Mark, in terms of representation on search engines. Safiya Noble, for example, talks about the images of white men that are returned when people search for 'executive', or the white collar jobs that are primarily recommended to white people. And the challenge is that this might accurately reflect society, right, because we live in a biased and unfair society, and it also therefore reflects the material that some of these models were trained on. So, I was wondering what opportunities you see in the future to correct for these types of established biases, or what the challenges are there, with search engines in particular?

Prof Mark Sanderson:

I think the wonderful work that you guys do – finding things that are wrong with the search engines, and thinking in ways that we don't – actually makes a huge difference. There have been countless examples of investigations, either by journalists or by social scientists, that have thrown up problems that the companies have then worked to fix. I mean, look, I was actually setting up a Slack channel for our exec and was struck by how hard it was to find a nice icon that was gender neutral when you type in the phrase 'exec'. So, I know exactly what you mean. In terms of jobs, I was speaking to someone from a job search engine who was saying to me that one of his big fears is his boss standing up in a shareholder meeting and somebody coming up with an example like you described. So certainly for this particular company – I won't name them – something that was concerning them greatly was trying to avoid those kinds of PR disasters. So, I think there's growing pressure on these organisations to fix these things, but you're absolutely right: you type in 'exec' and ask Google to give you icon-like images, and it's kind of depressing what you get back.

Dr Kylie Pappalardo:

Thank you. We've got a lot of questions coming in on the app, which is great. Quite a few for Patrik, actually. So, I'm going to start with one from Simon Elvery, for Patrik: does your research on Spotify and diversity extend to the payment models that they use? They are automated and algorithmic, after all.

Prof Patrik Wikstrom:

Yes. Well, yes, but I'm not sure if it's in the way that you think of. What we are doing in terms of the payment models is looking at alternative payment models that might solve some of the problems with the current payment models on streaming platforms, where revenues end up with the most successful artists and not so much with the less fortunate ones. We're looking at models where a song that is played from a playlist generates less revenue than a song that is found via search. The general thinking behind that is that the one is casual listening, and is in some way not as important as the other, where someone actually knows that they want to listen to this particular Beyonce song. That should probably be paid more, while a song that has just been casually played for that user shouldn't get as much. But these are alternative models that we look at speculatively: what would happen if we do this? Who would benefit? Who would not? So, that's how we're using the recommender systems, connected with the payment models.
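As a rough illustration of the kind of model Patrik describes, one can weight each stream by how it was initiated before splitting a royalty pool. All counts, weights and dollar figures below are invented for illustration – they are not values from the project:

```python
# Hypothetical stream counts per artist, split by how each stream started.
streams = {
    "superstar": {"search": 1_000_000, "playlist": 9_000_000},
    "niche_act": {"search":   400_000, "playlist":   100_000},
}
WEIGHTS = {"search": 1.0, "playlist": 0.5}  # deliberate listening pays more
POT = 100_000.0  # total royalty pool in dollars (invented)

weighted = {artist: sum(WEIGHTS[kind] * n for kind, n in counts.items())
            for artist, counts in streams.items()}
total = sum(weighted.values())
for artist, w in weighted.items():
    print(artist, round(POT * w / total, 2))
```

Under a plain per-stream split the niche act would get under 5 percent of the pot; with playlist streams down-weighted, its share rises, which is the redistributive effect these speculative models probe.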

Dr Kylie Pappalardo:

Great. I have another question about Spotify, although I think the implications extend more broadly, actually, from Mark Andrejevic. He says: I think it may have been Spotify a while back that claimed to be able to infer political preferences of users. To what extent does an examination of the cultural side of recommender systems connect with questions about political polarisation? I guess Patrik or Louisa, if you have thoughts on that?

Prof Patrik Wikstrom:

Yeah, I remember that claim, or report, from Spotify, which is interesting, if we put it like that. It is simply not part of this study. We are staying with the musical practices and looking at how musical practices change over time, and we're trying to stay away from those inferences between musical taste – or musical practices, I should say – and opinions or political positions or whatever else you might be able to infer from that. Although, yeah, I've seen that research and it's quite common to do that, right, but it's not part of what we're doing – that's not our ambition at all.

Louisa Bartolo:

I can't speak to the Spotify study, but in the case of Amazon book recommendations, I know, for instance, that one of our search terms is critical race theory, and we know that that is heavily politicised as an issue. Feminism is also bound up with the culture wars. So, I think when you go into these cultural spaces that are supposed to be non-political, they very quickly become political – and we can think of politics with a small p, you know. So I think we have to grapple with these questions – and I mean, my background is political science, so maybe it's just the way I am. I know it's fraught territory, but yeah.

Dr Kylie Pappalardo:

A question, I think mostly for Jeff and Mark, from Tim Graham: I'm curious to hear your thoughts about what authority users perceive recommender systems to have. So, for recommended content, do you think that users see that as a source of authority, or really just as a loose suggestion?

Dr Jeffrey Chan:

So, maybe I can kick off. I guess it probably depends on the thing we're recommending. Going back to the Spotify example, or on YouTube, right – the recommendation there is more of a casual, entertainment kind of thing, and in that situation the user may not see it as significant whether the system gets it right or wrong. In that case they're just thinking, okay, you recommended this and I like it, okay, I'll watch it; I don't like it, then let's go to the next one. But maybe for other things – perhaps for news recommendation, etc – the users themselves may have a little bit more trust, particularly if it comes from, I guess, a reputable source, or one that they have past experience with, where they think, okay, this source of media, this source of recommendation, seems to align with my preferences, whatever those preferences are. Of course, we could be reinforcing the echo chambers, but they'll still think it's a more authoritative source, because it aligns with what they've been thinking about, or what their interests or beliefs are.

Prof Mark Sanderson:

I think the search engines have become more aware of their importance as an authority, as a place where you get authoritative information. For me, one of the big changes was about five years ago, when I was at a workshop and a guy from Google came to talk about the changes they were making to medical searches, and how they were basically giving up on the algorithms and going to authoritative sources to help them. They were basically identifying particular sources that they trusted. They first started creating these summary pages that dealt with symptoms of particular illnesses, because it was just too important – they decided not to just allow the algorithms to make the selections, and you've seen that expand.

They have this very strange phrase – they talk about 'your money or your life' pages, which are basically pages that are either to do with your finances or to do with your health – and they place increasing emphasis on the accuracy and the quality of those things. They ask some really interesting questions: they actually said to some of their assessors, would you trust your credit card with this site? Would you give this treatment to your child? So they really try to push their assessors to come up with a good assessment. So, that works in some areas of search, but then there are plenty of other areas, as we've already discussed, where they use the algorithms, and then, as was pointed out earlier, web pages are written to be found, and people understand that there's a competition of ideas. People will try to make sure their pages appear high for particular keywords, even if those are contentious keywords. So, there's always going to be that tussle between people trying to get content ranked as high as possible, and the algorithms, and Google's methods to try and put out a more reputable, authoritative set of content.

Dr Kylie Pappalardo:

I have a question for Patrik from Charles Pigeon: is it intentional to use the word omnivorous to describe music listening? Are you using it as a metaphor of media consumption?

Prof Patrik Wikstrom:

I'm using it to get around the issue of using diversity. As we previously discussed, diversity is challenging when we talk about media diversity or cultural diversity – it has all kinds of different dimensions and issues with it. Using the omnivore metaphor brings another set of problems, for sure, but on the other hand it takes you straight to music, or to culture, and you can talk about more or less omnivorous tastes and practices in a way that is less complex, perhaps, than the diversity issues. A number of times I've had to spend quite a lot of effort defining diversity, with more or less success. So, this is an attempt to make that situation a bit easier.

Dr Kylie Pappalardo:

Okay, I've got two questions about transparency that I might ask together and throw to the whole panel. One is from Dan Angus: platforms often cite the potential for their algorithms to be gamed in response to calls for greater transparency – what is your take here? And the other question comes from Brooke Coco: to the issue of transparency, to what extent are these recommendation algorithms interpretable, or are they black boxes? And what are the implications? So, I'll throw that to the panel.

Mark, do you want to go?

Prof Mark Sanderson:

I think with the algorithms, you know, if you go and talk to the people who run them, they will certainly give you broad overviews. And when I mentioned there are 300 features, they might even break those features down into groups and say, well, these ones really matter a lot, these ones less so, and so on. So, you can get a sense of what they're like. There's an old professor I knew who was involved in a patent case with a search engine, and he told me that he managed to actually see the code – it was part of a patent case, as an expert witness – and he said you'd be surprised how straightforward it is, actually, for one of the very famous search engines. He said there's no magic in there; it's basically what we do, but just mixed up in a slightly different way. So, I think academics in general have a pretty good idea of how the search engines work. The exact weights and the exact balance of things are what the companies know about, but I think if you go and talk to, you know, computer geeks like Jeffrey and me, you'll probably get a pretty decent idea of how the algorithms actually work.

Dr Kylie Pappalardo:

To Dan’s question, do you think that the algorithms can be gamed?

Prof Mark Sanderson:

Yeah, certainly. I had a summer with Microsoft about 15 years ago, and that was a huge concern of theirs. They were struggling to stay on top of the spammers at that point, and they were saying, look, you know, we're logging everybody, we're keeping track of every click they've ever done – we need that information to deal with the spammers. There are people clicking on our ads to try and force us to give them money; there are all sorts of things going on. So, certainly spam and people gaming the algorithms is something that you can't ignore. They do need data, and a level of secrecy, to try and stay on top of that.

Dr Kylie Pappalardo:

Jeff, did you have anything you wanted to add?

Dr Jeffrey Chan:

Yeah, so, I think Mark covered it pretty well, but just to add a bit more on the algorithm side. The technology there, you know – I think most of the tech companies more or less use the same types of algorithms, etc. And as Mark said, the actual implementation is more a tinkering with the general ideas – the general ideas about recommendation, about trying to model how users behave, etc. So, in terms of how the companies approach it, it's probably more or less known how it's done, right. But going to Dan's question about whether people can game the system: absolutely, right. If, for example, you do know the different weights – exactly how to engineer certain things, what things they emphasise – if that is known, then it makes it much easier to game the system, particularly given that there's general knowledge about how these things work. If the actual formulas and equations and algorithms are known, then you can say, look, if I do this, then I know it will, for example, increase the weight on this thing, and that might lead to a recommendation for my set of items, right. So it's definitely gameable, right, if the algorithms and formulas are known.

Dr Kylie Pappalardo:

I suspect as researchers outside of computer science, you both have opinions about whether these algorithms are black boxes or not.

Prof Patrik Wikstrom:

Well, yes. I think it makes sense to consider them as black boxes, but we probably all know about the amount of algorithmic gossip, or folklore, or whatever you want to call it, that is an important part of creator culture on YouTube, or TikTok, or whatever it might be, where creators work very hard to game the recommender systems, and where that feedback loop between user behaviour and recommender system is an extremely important part of the culture and practices on the platform. Thinking particularly about TikTok, which is fascinating in the way that the recommender system is front and centre, yet extremely opaque. An interesting example of the value of the recommender system for TikTok: if you remember when Trump wanted to shut down TikTok in the US, the owners of TikTok were basically prepared to sell the TikTok platform in the US – that was not the problem – but they were very clear about the fact that they would not include the recommender system, which was the core of their entire business, and which goes beyond TikTok.

Louisa Bartolo:

So, I've been very inspired by the work of Taina Bucher – and I hope I didn't just destroy the pronunciation of her name – who has argued that rather than thinking about what algorithms are, we could shift to thinking about what they do, particularly for humanities scholars like myself. When many of us – in the public – ask what algorithms are and how they work, we're really interested in their outputs, in what they're doing around us. So, I approach this question of transparency in my research in terms of what we can observe in patterns over time. Rather than trying to understand exactly how the algorithms on Twitch work, or how Amazon's recommender system works, I'm interested in what sorts of outputs they give us. And I should also refer here to the work of Bernhard Rieder and Jeannette Hofmann, who have pushed this idea of platform observability.

So, I think regardless of whether they are interpretable – and I'm sure they are interpretable to computer scientists, of course – when we're thinking about transparency to the public, or accountability to the public, we have to think about a different set of questions about how we understand and make sense of these systems.

Dr Kylie Pappalardo:

So, we have two minutes before afternoon tea. There are three more questions on the app. I'm going to ask just one more, and for the other two I think there might be an opportunity for the panellists to answer afterwards on the app. So, the last question is probably for Jeff and Mark, from Ashwin Nagappa: I was wondering what your thoughts are on the physical devices that people use, and the platforms like Android, iOS, etc – the role that they play with respect to search engines and recommendations. Does the device impact either the recommender system used or how people experience it?

Prof Mark Sanderson:

Probably. I mean, it's a good question – I'm not sure what the answer is. I've got an Android phone and I'm probably logged in on my Google account, so maybe there's some cross-application information being used there that's altering my Google search. But I have to say, my kids have got iPhones, and I've never come across a search that somehow works for me but doesn't work for them. So, not immediately. Maybe some location stuff, I don't know. In general, iOS is a bit more privacy-focused than, say, Android is, so maybe there's some impact there.

Dr Jeffrey Chan:

Yeah, so, I don't know either. I have devices that use both operating systems, and the interfaces are different – similarities, but with differences. But in terms of the type of recommendations, I'm not sure if there's a difference between the operating systems, particularly when some of the apps recommend the same things. So, Google Maps versus the Apple Maps recommendations, right – the functionality there is more or less the same, but in terms of how they recommend, yeah, I'm not sure there's a difference.

Dr Kylie Pappalardo:

Okay. Thank you. So, apologies to Julian and James Meese – your questions came in last, and there's no hierarchy here except first in, first served. So, thank you to all our panellists for an excellent panel, and a really interesting discussion on search engines and recommender systems. Please join me in thanking the panel again.
