EVENT DETAILS

News and Media Symposium – Search Personalisation and Polarisation
6 October 2021

Speakers:
Prof Jean Burgess, QUT node, ADM+S (chair)
Prof Axel Bruns, QUT node, ADM+S
Matthias Spielkamp, Co-founder and Executive Director, AlgorithmWatch
Duration: 0:59:23

TRANSCRIPT

Prof Jean Burgess:

Okay. Welcome back, everybody here in Kelvin Grove, and welcome back, crew on Zoom, for our last session of this part of the symposium – the session on, what's the title of it, Search Personalisation and Polarisation. So, a little bit different from some of the previous sessions: we're going to have two slightly longer presentations and then a bit of discussion.

I need to introduce Matthias. So, I actually need to have his bio in front of me – sorry about this, everyone. Hi Matthias, welcome.

I’m just opening a Google doc if anyone would like to tap dance while I’m doing that.

Okay, so, we're really pleased to be able to speak with you this afternoon, Matthias Spielkamp from our partner organisation AlgorithmWatch in Germany. Matthias is co-founder and executive director of that organisation. He's testified before major European government committees and is a member of the Global Partnership on AI. He serves on the governing board of the German section of Reporters Without Borders, the advisory council of Stiftung Warentest, the Whistleblower Network, and the expert committee on communication and information of Germany's UNESCO commission – among many other things. So, as you can hear, Matthias is really committed to making a major difference in the policy and regulatory environment surrounding algorithms and AI in the European context. We're very pleased to be collaborating with him on our search engine personalisation project, and Matthias, I'll hand over to you.

Matthias Spielkamp:

Thank you. Thank you very much. Can you give me a sign whether you can hear me okay?

Prof Jean Burgess:

We can hear you okay.

Matthias Spielkamp:

Great. And now the next challenge will be to share my presentation, and it’s really an odd feeling to be presenting from several thousand miles away, to actually a room full of people. Now that’s a change. So, let me find my presentation. Here we go.

Now, I hope everyone can see that. Okay, so, the title: platforms and data donations – between self-defence and acts of desperation – and I hope it will become clear why I chose that title. So, AlgorithmWatch is an organisation, a civil society non-profit organisation, and we say that we watch, unpack, and explain the effects of algorithmic systems on justice, fairness, equality, individual autonomy, and the public good. We call out misuse and advocate for the benevolent application of these algorithmic systems, because they must be used to benefit the many, not the few. That's trying to put the many things that we are active on in a nutshell. And I'll pick out one specific activity today that overlaps with, or works towards, our goal of ADM systems that benefit individuals and society. Because we also focus on platform governance, and here the question of course is: how can we make platforms more accountable while at the same time protecting users' rights and collective interests? Because there are a lot of rights affected here – not just individual ones but also those of society. So we said, let's find out what their algorithms do – I mean, the platforms' algorithms. And we created this idea of what we call data donations. To start with an example: the first data donation that we did as an organisation was in 2017, and it was funded by state media authorities in Germany, who oversee first and foremost traditional broadcast media, radio and TV, but are of course moving more into the field of online platforms. They were looking at podcasts and web TV and such things, and of course there was – not the idea, but the necessity – that they were also thinking: if we are regulatory authorities, how can we deal with the online space? And we also collaborated with the Technical University of Kaiserslautern.
In that project, what we did was produce a plug-in, or an add-on, for Google Chrome and Firefox, which people were asked to install on their computers – and this plugin would then send search requests to Google. It was the run-up to the German federal elections four years ago; I mean, it was basically taking place exactly four years ago. And we asked the plugin to send these search terms as queries to Google. You may recognise some of them, like Angela Merkel, and you probably don't recognise others, like Alice Weidel, because they are not so internationally known. But you can see that these are the names of politicians who were running for election, and also the parties that were the most popular parties at the time. And how did we get enough donors? We collaborated with mainstream media, in this case Der Spiegel in Germany, which – also online – is one of the most popular news outlets, and they wrote an article about this and said: please support this campaign, download the plugin, and start donating your data. Because all the data that the users collected on their computers via the plugin was then sent over to us.
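The donation mechanism Matthias describes – a browser extension that fires a fixed list of queries and ships the result lists to the research server – can be sketched roughly as follows. This is a minimal sketch: the search terms come from the talk, but the record format, the `fetch_results` stub, and the donor pseudonym are hypothetical illustrations, not the project's actual code.

```python
import json
import time
from typing import Callable

# A few of the search terms from the 2017 experiment (politicians and parties).
SEARCH_TERMS = ["Angela Merkel", "Alice Weidel", "CDU", "SPD", "AfD"]

def build_donation(term: str, fetch_results: Callable[[str], list],
                   donor_id: str) -> dict:
    """Run one query via the injected fetcher and package the ranked result
    list as an anonymised donation record (no query history, no personal data)."""
    return {
        "donor": donor_id,              # a random pseudonym, not a real identity
        "term": term,
        "timestamp": int(time.time()),
        "results": fetch_results(term),  # ranked list of result URLs
    }

def collect_all(fetch_results, donor_id="donor-0001"):
    """What the plugin would do on each scheduled run: one record per term."""
    return [build_donation(t, fetch_results, donor_id) for t in SEARCH_TERMS]

# Stub fetcher standing in for the real browser search; a real plugin would
# issue the query in the user's own session and scrape the organic results.
def stub_fetcher(term: str) -> list:
    return [f"https://example.org/{term.replace(' ', '-').lower()}/{i}"
            for i in range(1, 10)]

donations = collect_all(stub_fetcher)
payload = json.dumps(donations)  # what would be sent to the research server
```

The key design point, reflected in the sketch, is that only the query term and the returned result list leave the browser – not the user's own searches.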

So, we had almost 4,000 donors in the end. Only about 1,500 of them had data that was usable for us, but it still added up to more than 8 million data sets that were available for us – or for the colleagues at the Technical University of Kaiserslautern – to analyse. And the results were that – oh, I forgot to tell you what we were looking into by sending all these queries to Google and analysing the results – we wanted to find out what level of personalisation Google was trying to achieve, or was doing, with these search results. And we found that, on average, between one and two of these so-called organic results differed between users. I don't know whether in this community I need to explain what organic means. I dislike the word, because there is of course nothing organic about this, but this is what it is called in the search engine business: search results that are not promoted for advertising reasons, that are not paid for, and that, Google argues, are just sorted by relevance. And only between one and two, on average, of these search results differed between users. And they didn't differ in a way that, for example, a result was in spot number one for one user and spot number ten for another. It was rather, you know, number three versus number four, something like that. And of course, the question with these results is: is this a lot, is it a little? And if these search results differ, what difference does it make? I have to tell you – and this is something that we'll discuss later – that we don't have too many good answers to this, because the entire universe of assessing what this means is a little more complex than just finding out – which is complex enough – how the results differ in ranking on Google.
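The headline finding – that on average only one or two of the organic results differed between users – corresponds to a simple pairwise comparison of ranked result lists. A minimal sketch of that comparison, with invented URLs standing in for the donated data:

```python
from itertools import combinations

def differing_results(a: list, b: list) -> int:
    """Number of page-one results one user saw that the other didn't.
    With equal-length lists, half the symmetric difference counts the swaps."""
    return len(set(a) ^ set(b)) // 2

def mean_pairwise_difference(result_lists: list) -> float:
    """Average the count over every pair of donors for one search term."""
    pairs = list(combinations(result_lists, 2))
    return sum(differing_results(a, b) for a, b in pairs) / len(pairs)

# Three donors' top-9 organic results for the same query (invented URLs);
# donor C saw one URL that the other two didn't.
base = [f"https://news.example/{i}" for i in range(9)]
donor_a = base
donor_b = base
donor_c = base[:8] + ["https://blog.example/local-story"]

avg = mean_pairwise_difference([donor_a, donor_b, donor_c])  # (0+1+1)/3
```

Note that this measures only whether a URL appeared at all; the rank-shift question Matthias raises (spot three versus spot four) would need a separate comparison of positions.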

Now, there is an English-language publication on this, so for those of you who are interested in the details of this experiment, you can look it up under this link. And of course I'll share the presentation with everyone who's interested after the event today. But I'll focus a little on the shortcomings of this first experiment that we did. So, first of all, it was done in a rush. The money came in very late, so we had to make some quick decisions, and after the fact I'd say we did not have a good choice of search terms, because they were too generic. We should have used something more contested, like immigration or old-age pension funds, and things like that.

Also a problem was the sample structure, because there's a certain audience for these mainstream media, especially Spiegel and their internet section, where this appeared. So, it was clear the sample would be skewed – and also, there are not a lot of people who would even participate in something like this. This is a major obstacle, and again, we'll talk about this a little later.

Also, in our experiment there were no dynamic changes to search terms possible, so we were not really able to react to any current political developments during these elections, which is very unfortunate, as you can imagine. And we were not able to collect a lot of demographic data about our donors, due to privacy constraints. We had to be really, really sensitive about privacy; otherwise we would have had a lot of personalised data, and that is highly problematic under the current data protection regime. And also, admittedly, we had some technical glitches that made some of the samples, or some of the donations that we received, unusable for us. Which means we learned a lot – and that is always good if you do an experiment. This was the first time, so we would definitely not say that it was a failure. It was a success as a campaign and also as an experiment for ourselves, but the results, and what you can draw from them, are quite limited. Ideally, we think, there could be a permanent, representative user panel using such a technology, established with the option to quickly change search terms, so you could do permanent monitoring, react to political developments, and then see what happens. And that, we think, would tell us more than the experiment that we did. Or you can also change the law and increase the transparency of ADM systems. I mean, I don't need to explain again, in this community, what ADM means – the entire centre is named after it – but usually people talk about AI and algorithms, whereas we at AlgorithmWatch are talking about automated decision-making systems.

So, what we wanted is for lawmakers to introduce legally binding data access frameworks to support and enable public interest research by academics, journalists, and civil society organisations – and also to develop and establish approaches to effectively audit such algorithmic systems. And this is why we started a project called the Governing Platforms project. It is, in a sense, ongoing; in the first leg of it, which lasted about 18 months, we had a lot of scientific studies done on media and communications, on law, and also on practical implications, and in the end we published the position paper "Putting meaningful transparency at the heart of the Digital Services Act" – the ongoing legislation in the European Union – on why data access for research matters and how we can make it happen. This was signed by a lot of internationally renowned academics, some also here at the centre, and a lot of civil society organisations. And Margrethe Vestager, the Executive Vice-President of the European Commission, in charge of platforms and the DSA, gave a keynote speech at our closing event and basically put forward a lot of similar demands. In the end, this resulted in Article 31 of the Digital Services Act, on data access and scrutiny. So, we would say this was quite a success. Not ours alone, because there were many other organisations and academics asking for the same thing, but we do think that we were part of it. And of course, we were able to use the real-life scenarios of the data donation projects to tell people that it's not enough to look at these systems from the outside. Meanwhile, we were already conducting a new experiment. We had some monitoring of Instagram, where we did something similar – a similar donation, with people collecting data from their Instagram feeds and handing it over to us. And we produced a couple of interesting stories.
The first one was picked up internationally. It was called "Undress or fail: Instagram's algorithm strong-arms users into showing skin", where we said there are quite clear indications that the Instagram algorithm doesn't just use people's preferences, but has its own preference in a sense – or, of course, the company has a preference – for promoting images showing a lot of skin, women in bikinis, bare-chested men, and, by that, trying to improve engagement on the platform. And that was picked up quite widely. We then changed focus onto politics in the Dutch election, and there we found for the first time that political posts posted by politicians were doing a lot worse on Instagram than posts about, you know, their hobbies, or being together with their families, and things like that. Now, Facebook's reaction to that was: we reviewed AlgorithmWatch's report and found a number of issues with the methodology; the report fundamentally misunderstands how we work, but despite all this we go even further to deeply study algorithms and work with academics and other key bodies. Yeah, sure you do. And we continued our experiments with a last one in Germany, which we did in collaboration with Zeit Online, one of the major newspapers and news outlets in Germany, and we took a similar approach to the one in the Netherlands. So, we again collected the posts of certain politicians and wanted to analyse those with the partners at the media company. And then we received a message from Facebook, and it said: we request that you take the following steps to address this issue to ensure compliance with our terms – remove the extension download from your website and the Chrome store, stop collecting data by deactivating the installed extensions, and delete the data you've obtained. We must ask that you take these steps, or you may be subject to additional enforcement action.
And honestly, at that time it was the middle of the summer; I was on vacation, and Nicolas, who did the experiment, was on parental leave. We weighed our options and said, okay, we give in – I'm not going to go up against Facebook in court – we comply. And that was also in connection with the NYU Ad Observatory, something similar: researchers in the United States who had already received a cease and desist letter, and in that case Facebook, rather than again pursuing legal action, just disabled those researchers' access to the platform and the tool they used – and they claimed they did all this for privacy reasons. And then the FTC weighed in – a very rare case – and published an open statement saying: it's not true, we didn't ask Facebook to do this, and it's not needed to comply with privacy rules; actually, we support this public interest research. And this is when we decided to go public with our case and say: this doesn't only happen in the United States, it happens in Europe as well. And we went public, and there was a lot of media coverage about this. And we started an open letter asking that platforms must stop suppressing public interest research, and asking governments to enhance the data access provisions of Article 31 to protect research about digital platforms. It was signed in the end by more than 6,000 people. So, again, that was quite a success.

At the moment we are receiving a lot of calls from the European Parliament, from the European Commission, and from member states, so hopefully we can turn this into a success after all, at least on the policy-making front. And if you're interested in the results of what we found out with the Instagram monitoring in Germany during the elections, we have an English-language summary of that. We also did a parallel project on YouTube, with a different methodology and a different technical tool, called DataSkop, and we have first results from this as well, with more coming up. And I went a little over time – I hope that's not too bad. If you would like to keep up with what we're doing, we have a newsletter that you can subscribe to. Thank you very much.

Prof Jean Burgess:

Thank you so much, Matthias. I’m not going to reintroduce Professor Axel Bruns, but I’m just going to remind you to please be putting your questions and comments into the Slido app as we go along, so that hopefully we’ll have time for a few of them, at least. Axel.

Prof Axel Bruns:

Fantastic. Thank you so much, and thank you, Matthias. It was great to hear from you and to see this work – thank you for fighting the good fight on behalf of all of us. We liked the project that AlgorithmWatch did in 2017, which Matthias has just talked about, so much that we wanted to do our own version of it as part of ADM+S. And that's what we're doing; that's what we've started. So, again, the key questions here are whether, to what extent, and in what way search results are personalised. Of course there are concerns that are always raised around this, about information inequality, about filter bubbles and beyond, and building on what Matthias has already said, we're also quite interested in how results change over time and to what extent they react to what's going on in the world, perhaps on emerging topics. Is there a period where poorer quality results are being served before they standardise in some way? So, these are all areas of interest.

For our version of this project, the Australian Search Experience, what we did was extend the number of platforms that we cover: Google Search, Google News, Google Video, and YouTube. We are still very much taking the approach that AlgorithmWatch took, which is basically to ask users to install a browser plugin. You might have seen, a couple of months ago particularly, the media campaign that we ran – essentially we had interviews and pieces in various Australian media outlets to encourage people to install this plugin for Google Chrome, Opera, Edge, and Firefox. And we launched this in late July. These are the people from the Centre who are involved. If you're interested, if you want to get involved and install the plugin yourselves, this is the URL to go to: admscentre.org.au/searchexperience. If you go there, that's what it looks like, and as of now we have nearly a thousand users who've installed the plugin. Together they've donated some astronomical number of search results – this is the number of search data sets that Matthias was talking about, but multiplied by the number of results in each, which is why it's a lot larger than the 8 million, I think, that Matthias talked about. And yes, once the plugin's installed, this is what will pop up. From time to time it will run its searches, basically piggybacking onto the user's profile with these platforms, running searches as if the user was running them, and then reporting the results back to our server. We also ask for some basic demographics from users as they install the plugin – again, very limited and very generic demographics, because while we're not covered by the GDPR, we certainly don't want to ask for any information that is problematic in any way. And this is what our distributions for participants are looking like at the moment. So, a reasonably good distribution across a number of fields, I think.

Where I'd say we still have significant imbalances is on gender: we've got about twice as many men as women. We have a big imbalance towards the eastern states, particularly towards Queensland, at this point. In terms of participants, as far as party preferences go, there is some imbalance. We have a good number of conservative as well as progressive voters, but the Greens are over-represented here, so this doesn't match voting intentions in Australia at this point. But in many other fields, I think we actually have a reasonably good distribution already, which is promising. Still, with the number of users we have participating at this point, I don't want to push this too far yet. We certainly want to continue to increase and replenish our user base, because we expect there to be attrition over time as well. But just as background to what I'm going to show you now, which is some of the preliminary results from this project, it might be useful to explain some of what we see. So, I'm going to take you through two very preliminary metrics that we've developed for this, which build a little bit on what Matthias has talked about with the analysis that they've done in their publications from their project, but also extend it a little bit further.

So, the first question is really: what are the search rankings over time for any one search? If you're searching for a particular topic, what do you see, and how does this change from day to day – we could go hour to hour or whatever as well, but for the moment let's just say from day to day – and how might that be different across different demographics or other aspects of our user base? For now, I'm talking only about organic results, again as Matthias also did, although we are also capturing promoted results, "people also searched for" boxes, and various other things. But let me just limit it to organic results for now, and again with the limitations of our demographics as well. We can extend this further down the track, looking at other units of time, other forms of variation, and so on. But for now, let's just do this. Rather than going through how we calculate this metric, which is based on the idea of rank flow, I'll just visualise it for you in a second. And of course, down the track what we're also interested in is breaking this down further into demographic distinctions, into different browsers – the question came up this morning whether the browser platforms may have an impact as well. But essentially, this is what this metric looks like, and I'll just step you through what you're seeing here. So, this is for a particular keyword – in this case our illustrious deputy prime minister, Barnaby Joyce. If you're searching for the name Barnaby Joyce, these are the results that you end up seeing. Now, every line here represents a distinct search result, a distinct URL that Google, in this case, provides. The URLs are ranked simply by how they appear in the list, so the line at the top is usually the top-ranked URL, and certainly was on that day. The line thickness indicates how much variation there is.
The thinner the line, the more stable the result; further down, you see more variation. If there's a single dot, that result only appeared on that day – that might have been a news story that appeared and then disappeared again, because Barnaby said something on TV. And the longer the overall list is for each of these days, the more variation there is in the search results. We're looking here only at the first page of search results, which normally contains nine results. So, if you're seeing nine results, then these were basically everything that everyone saw. If you're seeing twelve or more results, then there was a lot more variation, and they all appeared for some significant group of our users. So, that's the overall logic of what I'm showing you here. And to make this a bit more complicated, now we're looking at it for Google Search, Google News, Google Video, and YouTube. In this particular case we're looking at the search term Uluru Statement, and you see there that for Google Search and for Google Video, results are really quite stable over time. There's some fluctuation, but broadly the same results appear very much at the top for all of them. For Google News, even, there's also still some stability, although we would expect news to be more fast-moving. And for YouTube, well, the top results are actually very stable as well, but below that it's basically a kid's drawing – it goes all over the place, although it might stabilise again somewhere further down, by the looks of it. So, that's actually quite a typical result that we see in a number of cases.
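The rank-flow picture described above – one line per URL, with thickness showing variation across users – rests on per-day rank statistics for each URL. A minimal sketch of computing those statistics, with invented observation records; the actual project's pipeline will differ:

```python
from collections import defaultdict
from statistics import mean, pstdev

def rank_stats(observations: list) -> dict:
    """For each (day, url): the mean rank (the line's vertical position)
    and the rank spread across users (the line's thickness; 0 = perfectly
    stable). Each observation is a (day, url, rank) tuple recording one
    user's sighting of one result."""
    ranks = defaultdict(list)
    for day, url, rank in observations:
        ranks[(day, url)].append(rank)
    return {key: (mean(r), pstdev(r)) for key, r in ranks.items()}

# Invented data: two users each saw two results on Monday.
obs = [
    ("mon", "https://a.example", 1), ("mon", "https://a.example", 1),
    ("mon", "https://b.example", 2), ("mon", "https://b.example", 4),
]
stats = rank_stats(obs)
# a.example: mean rank 1, spread 0 (a thin, stable line at the top)
# b.example: mean rank 3, spread 1 (a thicker line further down)
```

A URL seen by only one user on one day – the "single dot" case – simply yields one rank with zero spread for that day and no entry on any other day.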

Here’s black lives matter, which as a fairly controversial topic internationally, we might expect there to be some more contestation, but again, for search for video, very stable. For Youtube at the top, quite stable and then a lot more fluctuation. For news, more jumping about, and often you see this where it starts at the top and then drops off. And that’s the typical behaviour I think, of news stories. Quite simply, they appear on a day and then the next day there’s something else that’s in the news, perhaps around these topics.

Breaking this down, interestingly enough, the device question comes in. We do see some variations per device – whether it's on desktop or mobile. And I should say we can't install the plugin itself on mobile devices, but we can spoof the appearance of a mobile device, so that Google thinks the search is being made from a mobile device although it's our browser plugin doing it on a desktop computer. And we do see some distinctions between desktop and mobile devices, in part because a different version of a platform is being served – the English-language Wikipedia desktop version versus the mobile version, for instance. But we do see some other fluctuations that are interesting and that we still need to explain. Critical race theory, another controversial topic, is again actually quite stable on Search and Video. On News, a lot of stuff appears and then disappears the next day, and on YouTube it really is all over the place. And that's, to me, quite interesting – particularly the distinction here between YouTube and Google Video, which in theory are both video platforms run by Google but are clearly designed differently. Google Video appears to be designed more to simply surface whatever the most appropriate result is for a particular topic; YouTube, of course, is designed to be sticky, to keep you there. So, from this I'm suggesting that there might be much stronger personalisation there, based on who you are and what you're interested in. And you see that the lines for YouTube are also much thicker, which means much more variation in the results. Just moving on – I'm just giving you a few examples here. This is the search term Covid, and you see some very different behaviours there. For Google Search, the top results are pretty much always the same; below that, a lot more variation.
Google News: basically new results every day, which in the midst of a pandemic is perhaps the expected behaviour – many of these news links actually lead to live blogs and the latest releases of infection figures and so on. On YouTube, too, you see stuff appear and then disappear the next day, partly because YouTube serves a lot of news video content for this keyword as well. And Google Video, again, is slightly different, but also very centred on news videos that are relevant one day and then disappear the next. So, these are the sorts of variations that we see. Now, here I'm breaking down Google Search per state, because for Search that actually explains a lot of the variation. Google Search for the keyword Covid is very strongly personalised by location – that's something we're seeing already. So, if you're searching from Queensland, you get Queensland health advice; if you're searching from New South Wales, you're getting their health advice, and so on. And that is actually a very significant factor in the personalisation that we see. Per state there's still quite a lot of variation going on, but there's much more stability compared to the nationally aggregated results. So, by the looks of it, there is also quite a lot of curation going on here of the results. In fact, one thing to say with Covid is that if you search for Covid, yes, there are some organic search results on the Google search page, but there's also a lot of other stuff there that isn't so much search results as just Covid background information. So, most users will actually have to scroll quite a bit before they even see the first organic search result on the page.

Compare that with vaccine – a similar kind of story there, and again YouTube is particularly all over the place, in part because there are lots of news stories being served on YouTube here. Whereas for "should I get vaccinated", which is not really a very different question, you see Google Search and Google Video being very, very stable, particularly at the top; YouTube also quite stable at the top; and Google News a bit more news-oriented, a bit more movable. So, that to us is also very interesting: that something as simple as "vaccine" versus "should I get vaccinated" produces such different results in our data.

Moving on to something entirely different, just to give you some other ideas: this is for the keyword home loan, which is just incredibly boring, particularly in Google Search, but all over the place in YouTube. So, these are observations that we haven't really had a chance to fully explain yet, but there are some very different behaviours, as you can see there. However, if you search for mortgage broker, the story is slightly different again, with much more variation in Search and Video as well – although, in terms of themes and topics, there's actually not that much difference. Mortgage broker also, by the way, breaks down very strongly by state, which home loan doesn't. Let me just very quickly go to another metric here, to give you some other overviews. Here we're interested in the intraday variability: the variability in search results on the same day, across all the users who were searching during that time. Again, there are a bunch of limitations, which I'll just scroll through quite quickly here, and how we calculate the variability gets quite complicated, so I'm not going to step through it one by one, but I'll show you this on the next slide very briefly. If you take the search results for a particular term on a particular day, you see here in the first column the actual URLs that we see in our data for that search term on that day. We've ordered them by volume, and this is for people who saw a list length of nine – nine results in their search. There is some variation in how many results you actually see. But if you take everyone who saw nine results in their search for mortgage broker on that day, these are the URLs that they found, and this is how often they appeared.

In total we’ve had about 2,260 results. We’re now interested in how many of these top results it takes to get to 80 of all the results being served. You see the very long tail distribution, so there’s a lot of search results that hardly ever were encountered by any of our users, but we have a bunch of them that that appeared very frequently. So, in this case we’ve counted back in essentially the first 31 results in our list of 250. So, of 2200 results are required to account for 80 of all search results encountered, right. So, that’s the calculation that we’re making. We’re comparing that 31 to what the ideal case would be if everyone saw the same results. So, 80 of a list of 9 is 7.2. So, you would normally – if everyone saw exactly the same results, you’d only need 7.2 or rounded up to 8 results, and everyone would have seen 80 of all the results, basically. In our data, in this case, we’re needing 31 though. So, doing a bit of calculation, on a scale from 0 to 1 variability, that’s a variability of 0.77. That’s the calculation that we’ve made for each of these search terms, and that’s then what they look like for Google search, news, video, and Youtube. And we’re seeing their per search terms, some very different behaviours. As you can see there. So, the lockdown vaccine covered in quarantine related results, show quite a substantial amount of variability on news and search for some but not all of them. On video, from most of them, on Youtube, as well, for quite a few of them. The results like mortgage broker, cash loan, cash advance and home loan work, very differently – oddly enough – across the platforms, but also across these different, but ultimately very similar kind of search terms. Searches for critical race theory, feminism, black lives matter. So, controversial topics that are part of culture wars at this point, all rank very low on Google search. Very little variation during the day, higher on Google news, Google video, and Youtube, and the average. 
Actually, if we take the average for all of these search terms – and I need to say, obviously, that these search terms are not an even distribution across all possible searches that could be made, because there’s no way to calculate this – but across the searches that we’re tracking, Google Search and Google Video have relatively limited variability, with some breakouts, obviously, as you can see there. Google News tends to be very much oriented around a middle there, and YouTube is kind of middling as well, but again with some very distinct bands in our data in terms of how variable and how different these results are across users. So, we’re seeing some very different behaviours across these platforms, and that to us is a starting point towards investigating further what distinguishes these different search terms, what kind of variability we see, and whether that’s driven by particular demographics or any other factors that we might be able to imagine. So, just to finish off then, these are the sorts of patterns that we’re seeing at the moment.

Google Search is actually quite stable in what it provides; Google News, because it’s news, is very fast-moving. Google Video is often quite static. YouTube is often stable in the first few results, and then there’s a lot of change and variability across different users down the track. So, there’s limited evidence, to a certain extent, of personalisation – certainly for Google Search, and there it’s largely driven by user location. Critical topics such as vaccine, Covid, and so on, but also possibly some of the culture war topics, may be manually curated to a point, to actually avoid some of that variability and fluctuation, and avoid different people getting different results about whether they should take a vaccine or not. Some variation is based on browser type as well, and particularly YouTube is a really interesting case that just needs further investigation, which we’ll get to hopefully very soon. So, for us these are the next steps: more analysis per platform, across platforms, more breakdown by these attributes. Of course, so far we’ve only talked about organic results, and the non-organic results are one of the other things that we want to look at, of course. And of course, we want to see what is actually being served and whether that is of good quality or not. Beyond that, we will continue to try and attract more users and a broader demographic profile, and compensate for participant attrition. We are also able to vary our search terms over time; we’ll certainly do that when the federal election comes along and, yeah, focus on particular events as they come up as well. So, that hopefully gives you a reasonably good idea of where we’re at, at this point.
Sorry, there’s a lot of data, and for those of you who are still here for the non-public session tomorrow afternoon, we have another session where we can explore that a bit further. But that hopefully gives you a bit of an idea of the sort of data that we’re actually starting to see from this, which I think actually supports quite a few of the results that the AlgorithmWatch study also showed. But hopefully we can take this a bit further still, as well. So, I’ll leave it there and hopefully we’ll have a bit of time for discussion still. Thank you.

Prof Jean Burgess:

Thank you so much, Matthias and Axel, for that deep dive into these really important projects, and I suppose the broader kinds of questions they throw up, both about the extent to which search results are not only personalised but curated, and for which topics, and in which ways, and with what drivers – external and internal drivers. I suppose – I’m not sure, we have a lot of – we’ve got a few more questions now, and I have a few too, I suppose. Just to pick up where you left off, Axel, and to go back to what Matthias was presenting about: some of the inevitable limitations of that early effort, where we were looking at keywords associated with an election, and we were looking at Google News and so on. So not only are they kind of neutral terms, you would expect them to be – well, election communication is quite heavily monitored and regulated, and other social or cultural topics aren’t. But also, it’s taken a very long time, I think, for policy actors and concerned citizens to understand that YouTube is actually a pretty significant platform for communication, and you know, our colleagues at Data & Society, led by danah boyd, were talking for quite a while about the data voids that emerge around emerging controversial topics. So, I suppose that’s all to lead into asking in which bits of this work we can start to connect what you’re seeing in the early results with what we know platforms are actually doing, in terms of their policies for intervening on the curation of results, and so on. So, for example, do we know that they are very careful about the top two results for searches? A broad question, I guess.

Prof Axel Bruns:

Well, I mean, one encouraging point I take away from the earlier discussion that we had was, as Mark was saying, that some of the platforms are open at least to generally talk about the way that they shape their algorithms, and perhaps curate their results. So, having these results now, it might be time to start to talk to some of these platforms as well, and understand, well, to what extent is there curation. Is there, even in existing public statements, any sign that they might have engaged in curation? Of course, for some of these results it is very obvious, if you just use Google to search for Covid or vaccine or whatever, that this isn’t a normal search results page, but one that a lot of manual labour has gone into. But how are these pages chosen? How are these topics chosen? For how long will that persist? Was it done, for the question of vaccines for instance, before Covid, when there were already anti-vaxxers out there? I think those kinds of choices and those internal policies of when to intervene need to be further investigated. And I think, hopefully, some of the platforms might actually be quite open to talking with us about this as well.

Prof Jean Burgess:

Matthias did you have anything to contribute on that particular point?

Matthias Spielkamp:

Just briefly – whoa, I have an echo, sorry, that was irritating for a second; I don’t think you can do anything about this. So, the one thing that I like about this discussion and how it’s going is that more people just become aware of the fact that there are algorithms that are used to curate these search results. Because this is far from common knowledge, and it’s not going to become common knowledge just because of the experiments that we’re doing. But even there, I would say there is an upshot to the conflict and the controversy about this, and to things like Facebook intervening and threatening researchers, and so on and so forth. That’s all of course not a good development, but at the same time, it gets the entire topic more attention. And it’s really needed, because I guess we all agree that a big part of the equation here is also that people need to understand what platforms are and how they work, and in that sense, I’m an optimist. I think there is a chance for at least some good, in the sense that people become more aware that what they are seeing is not some neutral, objective, whatever, sorting on these platforms, but that there is a lot of curation going on. And that’s important.

Prof Jean Burgess:

Thanks, Matthias. I guess I’d throw in that, as well as engaging the public and engaging the companies, as we’ve heard a bit in the previous session, people who are very actively involved as content creators and participants on these platforms are really engaged with these issues and often very knowledgeable about them. But perhaps there’s a bridging piece of work around some of the technical aspects that people like us, in our centre, can provide. We have a question from anonymous, which sounds ominous.

These platform oversight projects tend to focus on the very large dominant platforms. Does this mean that perhaps there are smaller platforms that we should be watching out for who escape our attention?

Prof Axel Bruns:

Well, there certainly are. I mean, I guess the reason that we’re starting with these dominant platforms is because they’re dominant. And you know, I don’t know what the current market share is, but a very substantial percentage of all searches will be done via Google in the first place. So, it has a lot more of an impact on the information environment that people live in than some other, much smaller platform. That doesn’t mean that we should ignore those platforms, but obviously we’re interested in the extent to which the information flows in the information environment that we all live in are shaped by search and recommendation algorithms, and I think starting with the ones that are most prominent is sensible. However, if you look at particular communities or particular groups that might be using other platforms predominantly, then clearly, yes, we should also have a very close look at how they work.

Matthias Spielkamp:

Yeah, one quick comment on that as well. I mean, the first obvious answer is yes – you know, the smaller ones are escaping this kind of scrutiny. For example, with the legal implications here in Europe, the draft of the Digital Services Act is talking about the VLOPs, you know, the very large online platforms. And they are the only ones who are then required to do things like a systemic risk assessment and this stuff; it doesn’t apply to the smaller ones. Now, there’s of course, first of all, a very good reason for that, because it is quite a substantial requirement, and compliance with it is difficult and expensive, and you don’t want to give the large platforms another competitive advantage in the sense that, oh yeah, we can do it, we can throw our dozens of lawyers at this, but the smaller ones can’t. That would be an outcome that no one would like to see. But then on the other hand, there is a very substantial question behind this. For example, the legal academic Matthias Cornils, who we worked with in the Governing Platforms project, has a lot of doubt about how this law is structured, because he’s arguing exactly that if there is harm done, there shouldn’t be a difference between harm done on a small platform and harm done on a large platform – especially when you’re talking about many smaller platforms that, you know, taken together can have quite an impact. But it’s a conundrum, you know; it’s really hard to solve, and we need to think about how we can also address the smaller ones and what we can do about this. And at the same time, as always, this needs to be balanced with the freedom of expression and assembly rights that we also need to take into account here.

Prof Jean Burgess:

There are a couple of questions that I’m going to try to combine here. One more simple one from Heather Ford, who wants to know which types of normal, organic results we will be looking at, or what we might do with those. And somewhat relatedly – I also get that ancillary set of recommended searches, ‘users also search for’; what might we do with those? That’s from Simon.

Prof Axel Bruns:

Yeah, thank you for that. That’s, I guess, one of the things that we’ve really tried to do with this version of the tool: to capture pretty much everything, essentially, that we see on the results pages when the search is done. So, yeah, if you’re seeing – as for most searches you will – information on what people also searched for; if you’re seeing side boxes – for instance, if you search for a public figure you might see a side box with something drawn from their Wikipedia profile or whatever else it might be, or for a company you’ll see the same; if you’re searching for a current event you might see some other information – whatever; for the Tokyo Olympics you might have seen the latest medal tally or something. So, any and all of that we are capturing at this point. What we will have to do is actually sift through that and see what the typical – if there is such a thing – additional non-organic bits of information are. And of course, yeah, the other thing is promoted content as well, which will appear across a number of these platforms, not just in Google Search. So, any and all of that we’ve captured, we’ve got to sift through now and work out what we can do with it. To be honest, a lot of it is not necessarily very standard, because it differs quite significantly across search results. But even that is also interesting: when do you and when don’t you get the sidebar boxes, when do you and when don’t you get recommendations for what people also searched for, and what does that tell us about some of the internal decision-making that might be going on at these platforms, that might ultimately be baked into the platform algorithms as well.

Prof Jean Burgess:

I think that sort of relates to Mark’s question about what the indicators of manual curation are. So, it strikes me that it would be really useful – if no one’s already done it, I don’t know – to kind of do an anatomy of the page of search results: what you might see, where we think that comes from, obviously trying not to reproduce a binary opposition between manual and automated results and so on. I don’t know if you think that’s an interesting idea.

Prof Axel Bruns:

Absolutely, I think so. And of course, this is not something that stands still; it evolves over time as well, and particularly when there are major events, there might be new ways of doing this that pop up that we haven’t seen before. So, yes, absolutely, I think that’s really of interest. One problem, of course, that we have with this, as we have with some of the comparisons I’ve shown before as well, is that we can’t possibly cover the entire breadth of possible things that people might be searching for; there are just too many searches, and too many different searches, being done. So, we’ve tried to have at least a good range of topics, from the mundane to the political, to the urgent, to the non-urgent, whatever else it might be. And we can only run so many search terms as well. We’re running about 40 search terms in a single session at this point, which is about as far as we can go without really bogging the user’s computer down. We’re also – that’s one thing I should also say – limited, of course, by what we can ethically run on someone else’s computer. So, we’re not going to use QAnon as a search term, or, you know, ‘how can I fake my vaccination certificate’ as a search term or something like that, because we don’t want to make it appear as if this user, who’s very innocently come to us and said, yes, I’m donating some search results to you, is now suddenly a QAnon adherent, or QAnon-curious at least. So, we have some limits to what we can search for. It would be very interesting to do a large study of what people see when they search for a controversial term like that, but we don’t want to put that in users’ search histories, obviously. So, yeah. But we’d love to have a really good breadth of terms that we’re searching for, to answer some of those questions: well, in what areas, on what topics, do we see this embellishment, and where don’t we see it.

Prof Jean Burgess:

Again, another question, I suppose, about our background knowledge of what search personalisation and customisation of results normally do. Thao wants to know: at what level does localisation appear to work – country, state, suburb, location – thinking about implications for a particular race, in particular in local areas.

Prof Axel Bruns:

Very good question. To be honest, at this point we’ve only looked on a state-by-state basis. In the demographic data that we capture, we ask for postcode, and of course some people might then just say 4000 or 2000 or whatever, to just give the state; some people might be much more explicit and give the actual postcode of where they live or work. So, at the moment we’ve just aggregated that by state, but clearly we can, with that data, also then look further and try and distinguish, say, the southeast Queensland region from far north Queensland or whatever. Obviously with the limitation of how many actual users we have in any of these regions – at some point, of course, if you break it down too far, you’ll end up with two or three users who are basically all of north Queensland, and then we can’t make any reliable judgments from that kind of data anymore. So, yeah, I think some of the further breakdown into smaller areas, smaller than states, would be very useful if we can do it realistically with the kind of data that we have.

Prof Jean Burgess:

Matthias, do you want to add something to that?

Matthias Spielkamp:

Yeah, definitely, because this was something that we were discussing quite intensively when we did our first data donation project. Because the question was, you know, what kind of personalisation are we talking about, what kind of demographics are we talking about. And in a place like Europe, for example, I wouldn’t go as far as saying it could be a proxy for race, but in some sense for ethnicity that is definitely the case. For example, if you have a more, let’s say, international search – say it’s a certain person’s name or something like that – and the personalisation is based on the geographic location, and that is Serbia in contrast to Bosnia-Herzegovina, you know, something like that, then this becomes a very interesting question. We didn’t do that experiment, we didn’t run this, but this is something that certainly needs to be explored more in areas of the world where this can play a very big role.

Prof Jean Burgess:

Time for the last one, which is from Dan, but I think it’s a shared question: how do we encourage broader take-up of participation in these kinds of data donation citizen science projects, and how do we scale up these kinds of efforts, perhaps?

Prof Axel Bruns:

That’s always the challenge; I think you can build it, but they won’t necessarily come. In the first place, they need to find out about it, which is why we’ve done a lot of media and community outreach already, and we will very much continue to do so. Obviously, we will have this plug-in in the field for a year in total, so until mid-next year, and all through that time I’m hoping to do further outreach and make this really visible as a project. On an ongoing basis, of course, as we’re getting results now, we’ll also do follow-up media work and reach out to community groups, to really try and make these results visible and thereby hopefully generate some further take-up. But that is the big challenge, and that’s where – Matthias, you talked about this idea of perhaps having a regular, established panel that could participate, one that’s representative. Ideally, of course, that would also be great to have for these kinds of projects.

Matthias Spielkamp:

Yeah, I can only add to that that this is the challenge we are facing all the time, and at the same time it’s one of these very good reasons why we say there needs to be more direct access to platforms’ data, right. Because we can only ever do so much, and I think the experiment, the one in Australia, seems to be really successful in the sense of the number of data donors compared to the population, so congratulations on that. That’s pretty cool. But again, you know, this is not sufficient, and we need other kinds of access.

Prof Jean Burgess:

Well, I think as Mother Abagail says in Stephen King’s epic plague novel, The Stand, there’s a storm coming. So, I think we have to wrap up. I cannot remember if I’m meant to say anything else in particular – Kathy? – except to thank all of our participants in the day, both here and abroad.

Prof Axel Bruns:

We have another session, thank you.

Prof Jean Burgess:

Do we? Yeah, oh, I’m not supposed to do anything.

Prof Axel Bruns:

I’m actually running the next session, so you can just hand it over to me if you like, actually.

Prof Jean Burgess:

So, thank you for your attention to this fabulous panel. Please thank our speakers. I’ll hand off. Thank you, Matthias. Thanks so much.
