EVENT DETAILS

Rethinking Data in the Platform World: Alternative Data Governance // Alternative Data Economies
19 April 2021

Speakers:
Dr Jake Goldenfein, Associate Professor, University of Melbourne node
Prof Christine Parker, Chief Investigator, University of Melbourne node
Prof Katharina Pistor, Columbia Law School
Salomé Viljoen, Cornell Tech and NYU Law School
Watch the recording
Duration: 1:29:54

TRANSCRIPT

Prof Christine Parker:

Welcome to the first in a series of four seminars on alternative data governance and alternative data economies. This seminar is hosted by the Australian Research Council Centre of Excellence for Automated Decision-Making and Society, in combination with the Centre for Artificial Intelligence and Digital Ethics at Melbourne Law School, the University of Melbourne, and Jeannie Paterson is here representing them, and the Humanising Machine Intelligence project at the Australian National University and Seth Lazar.

My name is Christine Parker and I’m a professor at Melbourne Law School and a chief investigator in the Centre of Excellence for Automated Decision-Making and Society, with particular responsibility for the sustainable governance strand of the centre’s work. The Centre of Excellence, ADM+S, does its work on lands throughout Australia, and in the spirit of reconciliation we as a centre acknowledge the traditional custodians of country throughout Australia and their connections to land, sea, and community. We pay our respect to their elders, past and present, and extend that respect to all Aboriginal and Torres Strait Islander peoples. Today I’m sitting here at Melbourne Law School, which is located on the sovereign lands of the Kulin nation. Melbourne Law School is committed to building and supporting just relations between Aboriginal and Torres Strait Islander peoples and the state, and between their respective laws, legal traditions, and jurisprudence. We acknowledge the enduring sovereignty and authority of Indigenous law and legal systems. We particularly acknowledge the role that law schools have played and continue to play in perpetuating colonial injustice, and we believe in, and are committed to, the transformative capacity of law.

Our work on artificial intelligence and automated decision-making is one area where we seek to understand and inquire into more just and fair legal approaches and governance mechanisms, for the benefit of all. The Centre of Excellence for Automated Decision-Making and Society has been funded by the Australian Research Council to help create the knowledge and strategies necessary for responsible, ethical, and inclusive automated decision-making in society. This seminar series is one of the centre’s projects, and mostly the brainchild of Dr Jake Goldenfein, who I’ll introduce in a moment. We’ve created this series of four seminars to reinvigorate our thinking about the governance of data and digital economies by asking some fundamental questions: what is data and how does it work in the digital economy? How does the law of organisations interact with and condition the digital economy? How can regulation tackle and transform platforms as markets in and of themselves, not just as market players? And finally, how might data be repurposed as a tool for collective and democratic forms of governance and control?

Today’s seminar is concerned with data governance, and in two weeks’ time we’ll have the next in our series of four seminars, focusing on disciplining platforms as markets. The third will focus on organisations and intermediaries in the data economy, and the fourth on alternative forms of governance. Now, before I introduce Jake, who’s going to run the conversation today, a couple of logistics. This seminar is formatted as a webinar, so only the panel are visible to you. The panel includes our speakers, Jake, and myself, and also some other luminaries from our hosting centres who’ve promised to look enthusiastic for our speakers and to ask penetrating questions for the audience when it comes to question time. Please type in your questions via the Q&A function, which you’re probably familiar with on Zoom, at the bottom of the screen, and Jake will be responsible for reading out the most interesting ones, he says, and answering. And those on the panel who become visible during the Q&A will be able to speak their questions, so you can raise your hand and we’ll take a look and ask you. Okay, so finally, I’m delighted to introduce Dr Jake Goldenfein, a senior lecturer at Melbourne Law School and an associate investigator of the ARC Centre of Excellence for Automated Decision-Making and Society. Prior to his appointment at Melbourne Law School, Jake completed a post-doctoral fellowship at Cornell Tech, and he made it back to Melbourne to take up his position here during the pandemic in 2020. Jake studies the regulation of surveillance in cyber-physical systems and the relationship between data science and legal theory, and he recently taught a master’s subject on platform governance for the first time here at MLS. He’s a marvellous thinker and interlocutor on all things data governance, as you’re about to find out. So, over to you, Jake.

Dr Jake Goldenfein:

Thanks so much, Christine. And thank you to everyone. I’m very excited that the day has finally come that we can kick this series off; it’s something we’ve been thinking about for a long time, and it is my absolute pleasure to introduce the speakers for today. Professor Katharina Pistor is a professor of comparative law at Columbia Law School, and she has been writing in the fields of corporate governance, finance and law, and political economy for some time. Her 2019 book The Code of Capital was published by Princeton University Press, and in it Professor Pistor describes the various modules by which law is used to code assets into different kinds of capital that can circulate in the market. In that book there is an inkling of the possibility that some of the features that law offers to assets, such as durability, universality, and convertibility, might be provided through technological means. In her more recent article, Rule by Data: The End of Markets?, published in Law and Contemporary Problems, Professor Pistor continues musing on this possibility, demonstrating how, through different forms of control, data is used to generate revenues outside of the forms of coordination we typically associate with property rights and markets.

Our other panellist is Salomé Viljoen, currently a post-doctoral researcher split between the Digital Life Initiative at Cornell Tech and New York University School of Law’s Information Law Institute, and I think soon to be a faculty fellow at Columbia Law School. Salomé is a researcher at the intersection of legal theory, data governance, political economy, and their relationship to inequality. Her recent smash hit, instant classic paper, Democratic Data, is I think forthcoming in the Yale Law Journal, is that correct? Yep, right. And this explores the way in which data encodes multiple kinds of social relations, both vertical and horizontal: the vertical ones being between a data subject and a data controller, and the horizontal being between data subjects and others that share certain characteristics with them. That paper discusses how these realities are something that our current regulatory strategies overlook. So, turning to the series now, I can’t really think of two better people to talk about this first theme in our series, data. Broadly speaking, the series proceeds through four inevitably overlapping themes of data, markets, organisations, and governance. At its heart, we are looking to re-orient the struggle over data to produce a better and fairer digital economy, and the point of the series is to identify and tease out the ways in which lawyers and others interested in data governance have typically been thinking about data, and how that might miss something, in fact might miss something really important. The ways that the digital economy seems to be developing challenge the notion that data is something that is simply given up or exchanged for access to a digital service, something that generates harm by undermining an individual’s capacity for self-presentation, or something that can be meaningfully regulated merely through transparency rights and the right to rectify data.
The goal here is to take on board what we can of the positive developments that continue to emerge in the regulatory world, while also recognising that regulatory efforts premised on greater individual control over personal data might not actually achieve the type of control that really matters. So, the questions framing this conversation include: how might data governance take into account the real workings of the digital economy? How might data governance take into account the reality that data is co-created and co-constituted through platforms, through law, and through personal interactions? And how can we understand data accordingly as a socio-technical and legal artifact, rather than just a naturally existing resource or part of an individual’s personality?

So, we hope to incorporate some of these conceptualisations of data, find some of the characteristics that are most significant to the digital economy, and use that thinking to reinvigorate regulatory strategies. Both speakers today have recently produced articles offering, I think, extremely insightful diagnoses of the problems with existing conceptualisations of data that inform governance and regulatory strategies, and I think both authors very lucidly explain why the kinds of control over data that have been the goal of existing regulatory ideas, be they premised on human rights, economic rights, or market organisation, do not necessarily afford the type of control that matters. As we were saying before, while both authors describe the problems of thinking about data as a discrete object or asset that can be governed through property rights, there are also some meaningful differences in the way these authors conceptualise data’s primary characteristics and functions. And indeed these differences lead to somewhat different diagnoses of the pathologies of the data economy, as well as some different solutions. So, I’m going to start with Professor Pistor. I loved your paper, Rule by Data, for giving such a coherent and sensible account of the relationship between data and control, and the way that data can be used as an instrument for governance in multiple different ways. This includes, on one hand, the way that data works to control and govern consumer behaviour without so much focusing on the need for manipulation, as well as, on the other hand, the way that data performs its economic function by challenging the arrangements that are built into property rights and market-like forms of coordination. I’m hoping you could explain a little more to our audience what exactly you mean by data as a source for and means of control, and perhaps talk about the kind of power that data confers on data controllers?

Prof Katharina Pistor:

Yeah. First of all, let me say thank you very much for having me. I’m delighted to be with you, at least via Zoom. I would love to come to Australia. I have never been, but here we go. So you know, I think something that has struck me, also working on my recent book that you mentioned, is that we have a tendency to reify everything and to use our current framing of things to also explain new things, right. So, land is sort of the object that we have always thought of in terms of property rights, and then we have used the same kind of mechanisms to think about financial assets, right. And that’s a very different thing. It’s not land, it’s not physical; in fact it owes its very existence to legal arrangements itself. It is a promise of future payment that is dressed up as an enforceable claim. Intellectual property rights, similarly: we just have knowledge that we basically give someone a license to monopolise for a period of time, and we call it an intellectual property right. And we’re seeing the same framing story going on with data. To be quite frank with you, when, after I finished my book, I turned to the data issue, I also thought that maybe, you know, the assetisation, the property rights thing, is how we can capture this. And then the question is, is it mostly an economic right? Is it a human right, a property right following from one’s work? And I ended up basically saying, you know, it’s neither. Data is something different. And I collaborated, or still collaborate, with a colleague, and we have tried to figure out how different disciplines define data, right. And every discipline brings its own baggage with it, but none really captures what this is all about. The technological literature says basically that data is information on a device. Well, that’s not very helpful. I think what we’re talking about here is also not just any data, but the behavioural data of individuals in association with others. It’s really an ecology of data of the interactions of people that we digitise in some way.
That’s the data that we’re talking about. You can have all kinds of data about other things, but I think it’s important to clarify that. And then the question is, why do we do this and for what purpose do we do this? And that’s where I basically say yes, of course, it’s also for monetary gains, but it works not through market exchange, it works through control. It’s basically trying to understand what people do and thereby develop predictive power about what they might do in the future and monetising that knowledge against them.

Dr Jake Goldenfein:

Okay. Thank you very much. Salomé, your description of what data does in the digital world is related, but it also takes on a somewhat different inflection. Your paper discusses how the existing regulatory focus on the individual is unable to take account of what you call the population-level effects of data production and processing, because it fails to take into account data’s relationality. Can you tell us more about what you mean by data relations, and in particular, the idea of horizontal data relations?

Salomé Viljoen:

Sure, yeah. So, I think similarly to Katharina, when you start to think about data, your first instinct as a legal scholar is to analogize, and having gone through that process myself, nothing seemed quite right. So, this paper is the result of a long process of me going back to my first instincts as a political economist and a political philosopher, to think about what struck me as politically, morally, and perhaps legally, ultimately, relevant about information production under informational capitalism. And that really led me to approach data as a materialised social relation. And, you know, law is quite good at thinking about distributing rights, duties, and privileges between, in this case, data subjects and data processors. But to me, this really missed a key aspect of how data actually functions as a commodity in informational capitalism. What makes it valuable is not just what it reveals about me as a data subject, but what data about me reveals about people like me. So, data is the computational input for behavioural analysis, and behavioural change happens on the basis of these relevant category features that I contribute to refining as predictive mechanisms, and that puts me into a relationship with people on the basis of this classificatory impulse. And what I’m really interested in, or what the piece tries to explore, is how law sets the terms of that relation, and how that raises, for me as someone interested in egalitarian interventions, issues of political and moral concern. And again, I’m really focusing on those horizontal relations between myself and the others that I’m placed in classificatory relation with, focusing on the quality of those relations as a matter of social and legal concern.

Dr Jake Goldenfein:

Great. So, one of the ways in which you, Salomé, talk about this in the paper is this idea of population-level interests at stake in data production. Could you explain a little more what you mean by that?

Salomé Viljoen:

Yeah. So, what I mean is that insofar as we think people have an interest in information production at all, which I believe we do, a lot of those interests, I would say an overwhelming majority of them, reside at the population level. So, insofar as we think there is something legally relevant about information being collected about me, that relevance resides at that population level, where the classificatory act is happening. It doesn’t reduce to the individual transaction that I have with a data processor, and I’m happy to provide a quick example.

Dr Jake Goldenfein:

Yeah, maybe a quick example?

Salomé Viljoen:

Yeah, this can get very abstract very quickly. So, you can take information that seems extremely personal. Let’s say that I’m using a fertility tracker app and it lets me know that there’s a good likelihood that I’m in the first trimester of a pregnancy. We can really think, okay, an employer of mine should not have access to that information. They shouldn’t be able to use that information against me or gain insight into it. And so you can think, okay, we really have to give me a lot of control, so that I can consent and say no one can get this information except the fertility tracker, and if the fertility tracker discloses that information about my first trimester pregnancy to my employer, I might have a legal right against them for doing that. But if I have a pernicious employer, they don’t need to get my first trimester pregnancy data. They can find all of these other ways in which I’m similar to other millennial women in southeast Michigan, and to my browsing habits, and make a 95 percent prediction, on the basis of this relationship that I’m in, that I’m pregnant, and act on me anyway. So, my interest is at that level of first trimester pregnancy data as a millennial woman in southeast Michigan; that’s a population interest I have in that information production process as a whole.

Dr Jake Goldenfein:

Very clear, thank you. Both of you have discussed the utility and disutility of using metaphors to talk about data, making the point that it’s not just another tangible, finite, fungible asset; in particular, its value doesn’t necessarily come from use or exchange.

Professor Pistor, perhaps you could talk about some of the characteristics of data that differentiate it from other assets? You sort of poked at this a little bit before, but I’m interested in whether you could draw it out a little bit more, because we seem to still be in a world where property rights and data are in a confused, entangled relationship, and perhaps thinking about these metaphors might help us unpack that.

Prof Katharina Pistor:

Yeah, so I think property rights is widely used, but it’s also widely used to say that data is not property rights. Most courts in the United States have basically denied claimants rights against data breaches, or against the use of their data, on property rights grounds, because they basically say that these data, in non-aggregated form, are not of economic value to you, so you can’t have a property right in them, essentially. So, it’s actually interesting that in our imagination we treat data as property, and when you look at some of the regulations, like the European GDPR, it also has a very strong property rights flavour to it, by sort of dividing personal data and non-personal data and then giving you consent. But in truth it’s actually not property rights. And the same of course is true for economics, right. It’s non-rivalrous, it’s not scarce. Okay, so it really defies most of the assumptions about the goods that we treat in normal economic models as assets, and none of them fits data. You can have them, they’re ubiquitous, they’re reproduced all the time, they’re certainly non-rivalrous. So I think that sort of drove us, in the collaboration with my co-author with whom I’m working on this, to probe a little bit deeper into how we might think about data. It’s not what it is but what function it plays in a particular context, and therefore why we should govern it in a certain way or not.

Dr Jake Goldenfein:

So, to continue on from there, can you offer us some consequences or insights for how we should think about data governance, premised on the idea that data’s value comes from its predictive power rather than its exchange value? And just as a side question, I wonder if this is unique in an economic system. Is data a sort of unique form of quasi-capital in this way?

Prof Katharina Pistor:

Well, yeah, it is and it is not. I mean, it’s really about information in a particular form, right. We’ve always collected information about others, and we have also used information against people. The entire advertising industry is already based on things like that. I think the scalability and the computational power thrown at it give it a different type of quality that we should be concerned with, because it does create much larger asymmetries between different actors in the economic world, and it also has quite pernicious effects on manipulating people’s behaviour. So, it has this performativity aspect that sociologists like to talk about. It’s not only that data is taken from us, but that data is fed back into us, and not only individually but into our networks, and creates certain outcomes. And that, I think, amounts to a level of control by those who have access to the data. And the individuals, either the producers of the data or those at the other end where data are used against them or to manipulate their behaviour, have no clue really where the data comes from. I think Salomé’s example was really quite brilliant. Right now you don’t really know what kind of information is being put into the equations and picked up by algorithms that tell other people what is relevant about your behaviour, which they can exploit for their own ends. And that, I think, destroys any kind of metaphor of a level playing field in the marketplace, which has never really existed anyway, but is clearly not present in this kind of very asymmetric, hierarchical relationship.

Dr Jake Goldenfein:

Salomé, your work similarly challenges certain mainstream conceptualisations of data as object-like, as well as person-like, getting at this other way in which data is conceptualised in its relationship to an individual, and you talk about how these ideas continue to inform data governance law. Could you elaborate a little bit on what you mean by those two conceptualisations?

Salomé Viljoen:

Yeah, my cheeky short response to this is that on the one hand you have people who are trying to apply Nozick, and on the other hand you have people who are trying to apply Kant. And I’m sort of interested in pushing people to Hegel. And I’ll explain a little bit of what I mean by that. So, on the object side, I think this is very much that classic idea that people have about property: that my data is like an apple from my orchard, and people are climbing over the fence and stealing apples from my orchard, and that’s unfair, and that’s like a claim of unjust enrichment. They should pay me for their apples, or I can decide who I want to sell my apples to. It’s not like that. I also think this relationality, and the sort of population-level, classificatory, computational political economy that structures data production, is relevant for pushing against the other impulse, which is to say that data is just an extension of myself, and so, you know, I have my data double out there. And she’s being exploited or manipulated or undermined in these key ways, and we need to shore up my sphere of personhood around the Salomé data doubles that are out there, so that all of the traditional human rights that we would extend to Salomé also apply to all of the Salomé data doubles. But as I intimated in my earlier answer, I think a lot of what makes data about Salomé economically valuable and potentially socially harmful is not really what it has to say about me as an individual; it’s what it has to say about people like me, which can get exploited and used against them. And so really I think the relevant task is thinking about, okay, what are our mutual obligations to one another with respect to those conditions, and with respect to the type of relationships that we are put into with one another on the basis of these computational processes.
And that’s not really a question about my personal rights; those are questions of political ordering, and how we want to mutually govern one another. Yeah, so I’ll leave it there.

Dr Jake Goldenfein:

Well, I suppose that leads us into some of the differences in the diagnoses of the pathologies of the digital economy that each of you focuses on in your work, although I think that in both of your writing there is some crossover in terms of what is actually at stake, even though these particular pieces focus on particular issues. So Professor Pistor focuses, in Rule by Data at least, on the disadvantaging of consumers vis-à-vis sellers, which I think is a proxy for a much broader set of conditions, whereas Salomé, you focus on problems of inequality. So, perhaps as it pertains so centrally to the idea of data as a relational artifact, can you tell us a little bit more about this particular pathology of inequality in data relations, Salomé?

Salomé Viljoen:

Yeah, so in my piece I’m centrally interested in thinking about, okay, if we take my conceptual conceit that what we’re really doing is structuring a series of social relations via these data flows, then one could diagnose the injustice, or the potential injustice, of data relations as materialising unjust social relations. And those unjust social relations could be relations of exploitation, but they can also, and this is particularly relevant for the classificatory processes of computation, really have to do with the way that we structure social inequality. By which I really mean, you know, if you take the social constructivist idea seriously, part of how the classification of women happens in ways that are socially relevant and potentially oppressive is materialised in data relations: my first trimester pregnancy data, and how that’s being used and used against me, is materialising the socially harmful condition of being a woman with a first trimester pregnancy in a country where, if I decided to terminate that pregnancy, I could face serious danger or serious harm. And so if we think that that’s potentially a social situation of inequality, part of how that inequality is actually materialised in my life today is via this data flow. We’ve always had processes of social inequality; they have previously mostly been analogue. The more we live in a digital society, the more those unjust social relations are materialised digitally via data flows. And that’s kind of centrally the project that I’m trying to at least sketch out in this paper: to say, okay, if we can have unjust social relations materialised via data flows, we can also potentially have just social relations materialised via data flows. And that’s kind of where I leave things for now.

Dr Jake Goldenfein:

Professor Pistor, as I mentioned before, in your recent article you focus on pathologies associated with the relationships between consumers and sellers but would it be right to say that that’s sort of a stand-in for understanding how these kinds of infrastructures and control over data flows position different groups? Not just in the context of the consumer internet, but in other digital contexts as well?

Prof Katharina Pistor:

Yeah, no, absolutely. I mean, I should say that the paper was published as part of a special issue on the legal construction of markets, and since I had just published my book on the legal construction of capitalism, which of course also throws out a lot of the basic assumptions. Are these free markets? No, there are no free markets. These are legally constructed power relations, right, and we act as if there were markets. And when you turn to data, you look at this and see that the economists now have trouble really explaining it. So, they’re now saying it’s a market, it’s a two-sided market. But is a two-sided market really a market? You look at the underlying relations and you see that they basically gain this predictive power over individuals and entire social groups, and then sell access to that without having compensated the consumers, or even asked for systematic consent unless they were absolutely required to. That is basically a control relation: I’m trying to get this information, which I can then sell to others who use it to gain a competitive advantage and to push the consumers to buy their products or something else. So, this particular relationship is basically just a proxy, if you like, for the amount of power that can be exerted over entire societies and groups with the power of data at a larger scale. I think again, you know, we can discuss whether there is just a quantitative or a qualitative difference between earlier periods and now. I think it’s qualitative, given the computational power that you can throw at it and given the monetisability of this huge access to data that is built into the system.

Dr Jake Goldenfein:

Thank you. You’ve worked extensively on the ways in which law facilitates turning assets into capital, the way that law codes the world to make it transactable, stable, and productive in terms of future returns. And we’re presently seeing a lot of effort, I think, in the commercial technology space to push for the assetisation of data, not just using property rights but other technological mechanisms, perhaps to replicate the exclusionary power of property rights, for instance in blockchain systems, which is something you talk about in your book. The way you talk about data in the Rule by Data article seems related, but the structures by which it becomes a form of pseudo-capital are somewhat different, in that it’s not about producing a non-duplicable and excludable token as a representation, but rather an infrastructure that focuses on data flows. So, could you perhaps talk a little bit more about your argument that future returns on data might be guaranteed through technical infrastructures as much as, or even rather than, legal infrastructure?

Prof Katharina Pistor:

Yeah, so one of the critical questions I’ve been asking myself is whether data organisation and data governance can be dislodged from a legal structure. In the book I still say no, and in the paper that you mentioned I’m a little bit more open to the possibility. And I’ve also written other things in between. So, you know, in the legal system we rely very much on state power to back private claims, and on particular legal institutions to invoke that state power so that claims can be made enforceable. In the data organisation there is a hybridity as well, between the technology and state governance. If you go back to the 1980s and look at the Computer Fraud and Abuse Act in the United States, it basically says that once you have aggregated data, so maybe the data don’t belong to anybody, they’re like the wild animals in Roman law, you capture them, but once you’ve put them on a physical device, if somebody comes and takes them, that’s hacking, and that will be treated like theft. So on the one hand we put the data on the device, that is the technological thing, and then the law backs this up and says, once it’s there, if you try to get it from them it is going to be treated as theft. Now of course technological change moves on and we don’t need the physical device anymore; we’re in the cloud now, so these things do change. And I think the key for the data monopolisers is also to make sure that they have technological means to ensure that not everybody has equal access to the data. And in part, I think, we also see this enormous scaling of data that nobody really uses to the full extent, right. We use fractions of the data for analysis and computation, but it’s basically pretending that, at scale, we have the greatest data set, and if you buy access from us to that data set you have these enormous comparative advantages.

So, it’s in part posture, but it’s in part also exploiting technological means, and then maybe having the law just basically added on. It used to be the device; now, with trade secrecy, as long as you can keep data secret with technological means, you can invoke the law, and then the state again, to protect it as well. So, I think we’re still very much in the hybrid world. But I think we have to watch that space, because in many dimensions the data technology is less reliant on state power. Just think about contracting. We can have contracts between Facebook and 2.5 billion users, just by clicking agree. So the scalability of contracts around a central figure, a central actor, basically creates something akin to a state backing certain property rights. So, I think if you think in those terms, then you see parallels. But the way in which you achieve power and control works through slightly different mechanisms than in the legal world as we knew it.

Dr Jake Goldenfein:

Just to follow on from that, do you see the potential for these technological systems to replicate sort of non-state jurisdictions, even things like international investment law in this context? You know, for instance, how Teubner talks about global law without a state: what you need is a contract and some sort of enforcement capability. And I think that so much of the excitement behind smart contracts was that we have mechanisms for automated enforcement now. And I wonder if that plays into your calculus in any way.

Prof Katharina Pistor:

In part yes, but I think we also have to realise that, you know, in the real world we still have something like fundamental uncertainty, and we just don’t know. So, smart contracts only go so far – although I think with artificial intelligence improving, we might actually have automatic updating there as well. So, there is a little bit of that. But you know, I wouldn’t rely entirely on smart contracts in that regard. And I suppose even those international jurisdictions still rely on something like the New York Convention to tie agreements and enforcement back to states one way or another, correct? On the other hand, I think, you know, what has given me pause was something like Facebook announcing Libra and the prospect of having a global currency. Now, the way it was structured, it was piggybacking again on state structures, so these things are always rooted somewhere in the states that we know. But my question at some point was whether Libra could decouple from state fiat currency, just like the dollar at some point decoupled from gold. And so the next thought that I had – and I wrote a little paper about this, on statehood in the digital age, in Constellations – was that, well, you know, Facebook and other big data companies are effectively taxing us already, because they take something from us which they can monetise. That is a kind of tax. And with the amount of data that they get, and the level of monetisation, which is increasing, it is not unthinkable that they could decouple from a state currency and say that the data power they have could backstop a currency. And then you have a new form of sovereignty. There’s still some physical footprint here, but you know, you can also hire an army, or hire a physical plant where you store your data. The question is, what is the dominant relationship?
And we’ve lived for the last couple of centuries in a world where we said it’s actually the territorial nation states that call the shots, you know. That might be changing. I wouldn’t think that this is forever, right.

Dr Jake Goldenfein:

So, I suppose then that this analogy – this analysis, really – is pushing home the idea that regulatory strategies premised on market behaviour or market failure, whether better property rights, better privacy rights, or even addressing market dominance, as mechanisms for remedying the data economy – trying to make this world look more like a market one way or another – are all doomed to failure. Is that a fair extrapolation of the sort of tendencies and trajectories that you’re describing?

Prof Katharina Pistor:

I think we could gain some breathing space. So you know, two of my colleagues are joining the Biden administration – I mean, Lina has to be confirmed first – and they have worked very hard on thinking about antitrust remedies, and I think we would benefit from them, because it would give us some breathing space. But what we really have to think about is what kind of data governance we would want. And there, I’m not a technophobe; I think there is enormous potential in this resource, call it that – the potential, right, to do, I don’t know, public health research, to organise ourselves in a way that is more peaceful, so that we respond more quickly to natural disasters, deal with climate. I mean, all kinds of things we could do in a positive way. And yet we’ve basically allowed it to evolve in a way that some can use it for enormous financial gains, and most others are the data producers, with little if any say in how we could actually really use this in a self-governing fashion.

Dr Jake Goldenfein:

So, this for me, connects to what is a growing literature about the ways in which platforms, or the ways that platforms operate, challenge the sort of mythology of the Hayekian victory in the socialist calculation debate, that price as discovered through repeat transactions is the unit of account that best coordinates production. Do you see data and the data economy challenging this sort of settled position?

Prof Katharina Pistor:

Well, you know, I think other things challenge that already. So yeah, it’s data in particular, because it just makes so visible the deep structures that we have. And you know, so again, is it a qualitative or a quantitative shift? It’s not completely different from anything else, it’s just on steroids – it’s basically a hypertrophy of what we’ve seen before. And so yes, it does challenge that position.

Dr Jake Goldenfein:

Salome, I know this is a question that interests you as well, do you have any thoughts about the way in which data is used for instance, to price our attention? Does that shift our understanding of how markets ought to work or how they work? Or what a market is even, or whether we need them?

Salomé Viljoen:

Yeah, I mean, I think I would agree. When you start to look at – and I really like the way Katharina put it, as prediction value instead of exchange value – the value that information has, it really shows the lie of this idea that price is just a distributed information process, and that markets are just these natural things that allocate on the basis of these distributed information products. So yeah, I mean, I think it certainly gives the lie to that. Are you trying to prompt me to say that I think we should abolish markets? I don’t know if I’ll go quite that far. But yeah, you know, I think they certainly trouble a lot of the ways in which we think of markets as sort of the primary allocative mechanism that we just default to. And I know a lot of the work that I do with you is thinking about how what, in conversation, I’ve referred to as market machines unsettle a lot of the ways that we’ve just sort of deferred to markets to allocate a lot of, you know, socially vital goods and services.

Dr Jake Goldenfein:

Thank you so much. So, I’d like to then move on to how we might successfully manage or better manage data production. Both of our speakers have discussed the need for governance mechanisms that collectivise power in one way or another, in order to decide the purposes for which data production and data processing proceed. Professor Pistor has suggested altering the organisational structure of data collection and processing, and Salome, you talk about altering the institutional structures of data production. So I was hoping that you could both expand on these ideas? Perhaps Professor Pistor you’d like to go first?

Prof Katharina Pistor:

Yeah, so in the paper that I published last August, I was basically trying to find a fix for the particular problem that I identified, which is the relationship between data producers and consumers, and then the data controllers, with the clients and brokers in the mix. And so, clearly, once you depart from property rights and contracts, we still think in old categories. So, the next category that we pull out of the drawer is organisational law – and I happen to teach corporate law, and of course I’m familiar with trust law, and you know, I always tell my students corporate law is the constitutional law for the private sector. So, we think organisations. So, how do you allocate control rights, participatory rights, maybe also economic rights, to a common shared pool of resources, right? That’s basically what we’re trying to do. And so I felt that if we are in the world that we currently are in, and you assume that there is Google and Facebook, and we don’t break them up and start from somewhere completely different, then how could we enhance the position of the people who are currently being used as producers, and very often are also the consumers against whom the data are being used, but not necessarily. And so the idea was to say we create something like a trust. Even if my own data doesn’t have any economic value, the aggregate data does have economic value, and there’s nothing to prevent us from saying, well, maybe all data producers should be beneficiaries of a share of the income that these data produce. And again, you could hire the service providers to clean the data, organise them, build the algorithms. It’s not really clear why the data producers have to be excluded from any economic gain here. And at the same time you could then also give them decision-making rights, because with millions or billions of users, what you do is you don’t have them participate in a direct fashion, but you have some kind of a representative structure.

So you use a trustee, for example, that is bound by a particular contract that has to be implemented, monitored, and then negotiated with the other side – whether it’s the controller, in the current version, who doesn’t have full control anymore, because they have to negotiate the extent to which they might have control. So, that’s the basic idea. Others have come up with ideas about trust structures: Aziz Huq has written a paper, which is on SSRN, about the public trust at the municipal level, for example, and Napoli has written about using a public trust structure. So, this is again a conventional tool, where we say this is one way in which we have dealt with, you know, protecting natural resources – maybe data is a little closer to that. And so we can use these structures. They probably don’t resolve all the issues, but it’s basically one of the default solutions that came to mind for this particular problem that I identified.

Dr Jake Goldenfein:

Thank you. Yeah, I’ve always thought that the move to intermediaries is a fascinating one, but I’ve also always been sort of challenged by this idea that when you position the consumer or data subject as also the ultimate beneficiary of the data processing that happens in the consumer internet, that might produce a contradictory set of incentives, in that all of a sudden the consumer is incentivised to maximally exploit their own behavioural data in that context. And I wonder if that’s a misplaced anxiety, or whether there’s sort of an organisational way around that?

Prof Katharina Pistor:

You know, I don’t think it’s a misplaced anxiety. I can see that, and clearly there are structural issues here as well – sort of, who has the time to think about this very seriously, and to make decisions about how data, either collective data or individual data, should be used. And if it’s just the remuneration, then some people might just want to maximise the remuneration and not care about the particular use, right. So this has to be preceded by some kind of a contractual feature where we basically say these kinds of data yes, and other kinds of data no; these particular usages will be underwritten, others will not be. And so basically you’re using the intermediary as a trustee that is bound to that collective contract about what we want to do with these data, and then we will deal with the remuneration. But that is of course an idealised version of how you would get to this kind of new social contract.

Dr Jake Goldenfein:

Well, I mean, I’m very interested in the idea of tying intermediaries to purposes, and that’s something that I’ve been working on with others, absolutely. Salome, you were talking – sorry, you’ve talked about altering the institutional structure of data production. Could you talk a little bit more about that?

Salomé Viljoen:

Yeah, so again, I think if you take seriously the idea that a lot of our interests in information do actually reside at this population level, and implicate not just sort of my individual dignitary interests and rights, but also the ways that social inequality is constructed and reproduced, then what follows from that, to me, is that a great deal of information should be thought of as public infrastructure, and subject to sort of public forms of governance. And so, you know, that might take the form of a public trust. I have followed the data trust scene and world for a number of years now, and again, they have all the benefits and drawbacks of flexible legal instruments, which means that as a solution they’re highly underdetermined. So, they can be great. They can also have shortcomings. I think a shortcoming of the trust model that I imagine is a little harder to overcome is, again – for me, as someone interested in those horizontal relations – if you think of the data subjects as sort of putting their input data into the trust, that still doesn’t really solve the problem of protecting the interests of people against whom, or about whom, that information may still be used. They reside outside of that trust. Some of that might be resolved by a public trust, a municipal trust mechanism.

So, of the trust models, I do tend to favour a public trust model. But really, I think we should generally take seriously this idea of information about us as a polity as something that is irreducibly public and political. And there, similarly, I’m not a technophobe – I’m interested in equalising data relations, not abolishing data relations. And I think that taking that seriously as the normative project just doesn’t abolish information collection. It very much ideally – and sort of speaking in ideals – would change the kind of information that we’re collecting. I think if we take the full suite of population interests into account, we’re probably collecting a lot less information about everyone’s shoe purchasing habits, and probably collecting a lot more information about everyone’s CO2 emissions and their water drinking habits. And I’ll sort of focus on the climate change thing – that’s like a foundational reason that I’m not a technophobe. I think the project of feeding, clothing, housing, and educating seven billion people, not just to the level of subsistence but to the point of flourishing, as we enter into a world of climate crisis, is going to require data infrastructures. It’s going to require technological infrastructures. And so for me, thinking about our theories of what makes datafication wrongful, or what makes datafication permissible, we have to keep that in mind. And we can’t articulate it as the sort of negative individual right that I have against others, or again, even against the evil Facebook company or evil Amazon, right. We really have to theorise within information production, not against information production.

Dr Jake Goldenfein:

I think that’s a wonderful space to pause this segment of our conversation, unless our speakers have any final words? We might open up to some questions. First from our small panel that we have here, and then we have some attendees who have been typing questions into the Q and A chat function. And I think hopefully we’ll have time to move on to that as well. So, I’d like to firstly thank you both so much, and now open up to the panel, if anybody has a question they’d like to ask.

Jeannie?

Prof Jeannie Patterson:

Thanks Jake. Sorry, I couldn’t find that button, so I think you’ve got something over Seth and Kim. But anyway, I will ask a quick question which is to both of the panellists. First of all thank you for your conversation with Seth- sorry with Jake. I thought it was fantastically interesting and really drew out some of the nuances and detail of your papers, so thank you very much.

My question, unsurprisingly given I’m a private lawyer, goes to trustees and that model. I know there’s a lot of interest in the trust or fiduciary model, which I think both of you have said goes precisely to the fact that we as lawyers, as a legal system, work by analogy – and we kind of can’t think of another analogy, I suspect, for what we do to control, collect, you know, protect the interests of a group of people who don’t have the time or interest to do that themselves. I’m interested in your thoughts on the extent to which the trustee model can actually overcome the problem that was raised about short-term self-interested behaviour by consumers, which means monetising data even when that is perhaps against their long-term interests, or indeed the long-term interests of the group. That being an important interest, do you think the ideas of trustees or fiduciaries – although they are analogies – have merit, because generally what we ask of fiduciaries and trustees is to think about interests that are greater than the short-term self-interest of the individual? Is that perhaps the attraction of those models – that we’ve got a vision, a horizon-looking element in those models?

Prof Katharina Pistor:

Yeah, I think that’s part of it. It’s also that we are trying to bind them to a particular issue, right. It’s hard to break the trust deed. I mean, they can wiggle around this in some ways, but somehow I think the idea is that, unlike a corporation or business, where you have the fiduciaries themselves also being the ones who are in the driver’s seat trying to do the stuff, with the trustee model you’re binding their hands a little bit more. That can also be a cost, right, because you want to be flexible in adapting this to the future. So there’s a clear trade-off here, and I don’t think by any stretch of the imagination it’s an ideal solution. What we clearly need is some way to overcome collective action problems and to find consensus, and some agents who will carry the task of making sure that, first of all, this consensus is implemented, but also that the other service providers, data controllers, whoever is in this ecology, will play by the rules that are set forth in this contract. And that of course raises very familiar issues. So, I don’t think we get out of that conundrum at all. It might still be superior to what we have.

Salomé Viljoen:

Yeah, no, I think I would echo a lot of those same points. I mean, I take the point that a well-constructed trust could perhaps prevent some of the cases of people just looking to maximise individual revenue on the basis of monetising their data as much as they possibly can. But I don’t think it necessarily solves the almost fundamental information asymmetries that come along with something like data production. You really have to have both strong technical and legal protections in place to make sure that the appropriate population-level consensus is what’s being honoured and effectuated by the trustee. Those problems persist.

Dr Jake Goldenfein:

Kim?

Prof Kim Weatherall:

Thanks Jake, and thank you so much for the conversations and the papers that preceded them, because this has been utterly fascinating. But listening to the conversation now, it struck me that we sit here, you know – we started out the conversation saying no one is really able to define what data is satisfactorily. So, we start reaching for these other concepts: concepts like power, concepts like relations, relations between people. And when it comes down to it, I’m just wondering whether the focus on data is itself that useful. Do we just need to stop talking about data, because data obfuscates, right? It is a quite different matter talking about shoe purchase habits, as you say, versus carbon credits. It struck me, as I was listening to Salome, there’s that question of the fertility tracker, and you’re in your first trimester: it’s not the fact that they have the data about that, it’s that it’s been used against you. It’s not the data, it’s the pregnancy, and the implications of someone using that pregnancy against you in some way – say, to fire you or get rid of you. And so I was thinking the whole time, why is it we keep talking about data? And I mean, ironically, I think you captured it, Katharina, when you said that it’s almost like the data materialises the relations. It gives us something to focus on when we’re actually talking about these bigger problems, like power, like inequality, like carbon emissions, et cetera. Ironically, data is immaterial, but it’s the fact that it materialises in the device some of these relations that we get to talk about. But you know, I still come back to that question: is it useful to talk about data, or should we be talking about the relations between people, the acts that are wrongful – not the data collection, the data processing, but the things that people are doing with data that are wrong, or that are disadvantaging people, or increasing inequality?

Prof Katharina Pistor:

You know, I think this is a double-edged sword. On the one hand, we’re basically conceding the point that the data monetisers, or the data assetisers, have made – it’s basically saying data is the new oil, and that is being used as a legitimising function, to say we should exploit it and everybody should have the possibility to do this, and if you take this away, that’s really economically very harmful. So, you make that concession when you treat data as assets. But because they have done that, I think we also have to create an alternative image, and I think that’s part of what I was trying to do in my paper – basically saying it’s actually this control, it’s rule by data. We’re using the data to exert control over others, and that is the end of the market. So, my hope was basically to get across the message that if we continue to do what we’re doing, we’re actually undermining the very structures on which the argument – that it is an asset and that it can be assetised in this fashion – rests. So, I agree with you in principle that we should certainly move beyond data. But it’s a particular way in which information can be captured, and computational power can be thrown at it, that in this combination is the real source of trouble. And of course, it’s always the social relations that we’re talking about. You could make the same argument about law: why are we talking about law, about property? Isn’t it the underlying power relations, right? So, in that sense it’s a similar argument here again, but I think there’s also something new added by this particular way in which data is collected and used and weaponised that we have to think about. There might be a better term to capture this than this asset-invoking notion of data, but I hear your trouble. I sort of agree with that as well.

Prof Kim Weatherall:

Maybe, yeah. Maybe multiple terms. Maybe it’s not just one term you know, not data but lots of things. Sorry, I’m interrupting.

Salomé Viljoen:

No, similarly, I get this question a lot, and I sort of answer like, I wish! I wish we didn’t have to care about data. But you know, I think it is relevant, not only because of how foundational an input it is – it’s the information of informational capitalism, and I think Katharina’s work captures this quite well. It’s important, or it’s relevant, for us to focus on data because of the way in which building up kind of data power and data capital, if you want to call it that, helps companies evade the traditional way that we think of regulating – like climate change on the one hand, and like pregnancy on the other. And the sort of biological metaphor – because I can’t resist reaching for metaphors, even though I know that they’re all limited when it comes to data – is really, I think of it as like a stem cell. So, you know, if it becomes attractive for Google to become a health company, they can sort of use their rich store of information to become a health company, and they can become a health company in exactly the way that evades all of the healthcare regulations that we have. And if they want to become a consumer credit company, they can morph that same data set into becoming a consumer credit company. And they can develop that consumer credit company in exactly the way that they need to, to avoid the regulations that we have for consumer credit. So, for me, it’s relevant to focus on the information because that’s kind of the underlying thing that can be morphed, or expressed in various ways, to act upon people in whatever ways are convenient and profitable for a company, while at the same time very much minimising regulatory costs. And so that’s kind of the economic side of things for me, you know, as someone in this piece who’s very much interested in that kind of social inequality, the quality of the materialised social relations that happen here.
I think it’s relevant that it’s happening on digital infrastructures, and in sort of data-mediated ways that are, I think, normatively distinct from the way that we’ve dealt with this stuff in analogue, non-digitally mediated ways. It introduces all of the problems of centralised power and centralised control that Katharina is sort of centrally interested in here, which is to say that a lot of these socially constructive processes that are happening, and that are acting upon us, and that have inegalitarian effects, are the by-product of a handful of companies seeking to do profit maximisation. And that’s quite distinct from how a lot of social injustice has sort of previously happened.

Dr Jake Goldenfein:

Fabulous. Seth?

Seth Lazar:

Thank you. So, again, just to echo the others, that was incredibly stimulating and interesting. So, I’m a philosopher, and I’m sort of coming into this area from that side. And one of the things that the last question kind of led you into, and that was sort of bubbling under various things that you said, was, if you like, the sort of underlying moral diagnosis. There’s a lot of the work that you just described which is about understanding how these systems actually work, and providing a way of thinking through the particular power relations that they’re constituted by. But then at several points you were referring to the sort of thing that provided the moral motivation for change, if you like. Katharina, you often said things like, data is being used against people, and that’s the thing that’s kind of driving it. Salome, you talked about how data is sort of encoding unjust social relations and reproducing them. And there was a lot of discussion, I think, of exploitation as well, although the exploitation idea was sort of a little bit in tension with describing it as not really being like property or labour. So, what I’d like to do is ask you to drill down into the underlying normative concern, and maybe think about someone who might make a counter-argument from the other side. So, think about something like, in the context of manipulation and using data against you, what if someone were to come along and say, well, all they’re really doing is using it in order to target advertisements. And like, yeah, okay, sometimes that can be really objectionable, like the much-cited example of the person receiving the pregnancy stuff from Target, you know. Okay, fine. Sometimes it’s really bad, but most of the time it’s just, you know, you’re into the gym and so you get adverts for new gym stuff, and that’s not really manipulative. You can always turn it off.
So, how do you kind of taxonomise the underlying moral problems, and how do you respond to those who might kind of play them down?

Prof Katharina Pistor:

Salome, you should take this first.

Salomé Viljoen:

Sure, yeah. So, my paper started as a political philosophy paper, so I feel like I was set up for this question. So yeah, I kind of come at this carrying two philosophical commitments into the work. One is a commitment to a particular social epistemology: I take the view that a lot of the social inequality that’s produced is actually in how we create and act on socially relevant categories of oppression. So, part of what makes me a woman in this social place and time is the ways in which that category can be used against me, or place me in positions of subordination. And then alongside that, I carry into this piece a political philosophy of relational egalitarianism. So, if you understand the world relationally – materialising social relations is the project that data infrastructures are engaged in – then the political goal is to equalise those social relations; the appropriate locus of justice is to create those conditions of equality in our data relations. And, being a good relational egalitarian, I sort of turned to – okay, the conditions of democracy define the just social relation. So, that’s how I would taxonomise the underlying philosophy, or sort of moral approach, in my piece. What would I say to someone who’s critical? Well, first of all, a lot of this stuff is being operationalised well outside and beyond ad targeting.
So, you know, just in terms of the scope in which this is being applied, one can sort of give the back of the hand to that narrow application. But, you know, more generally, I could say, well, that’s all very well and good for you if you are lucky enough to enjoy a privileged relationship in which you’re not placed in social relations of oppression. But these same goofy advertisement practices are being used by immigration and customs enforcement in this country to geo-locate suspected undocumented immigrants – not only on the basis of their own location, but on the basis of the location of people that they’re known to associate with – and to detain them, and to send them back to countries where they fear for their lives.

So, you know, one may be concerned not only with your own individual privacy project, or your own individual relationship with getting a notification at the gym, but also think about the social processes of oppression that you might be drafted into as a condition of engaging in digital life. And one might be politically troubled by being drafted into that particular political project.

Prof Katharina Pistor:

Yeah, and I think I would add to that also this performativity aspect – the extent to which the data is being fed back into that. I mean, you know, Cambridge Analytica, and also, you know, Myanmar, the genocide there: the way in which people are incited to do certain things with the data, whether it’s intentional or not. It’s a way in which entire social processes are amplified – not completely reinvented, but amplified – that I think raises a lot of concerns well beyond the market. Just coming back to the market: I don’t make a philosophical argument, but I do a little bit of a tongue-in-cheek transaction cost exercise in the paper, by saying, you know, people have always argued that firms only exist because transaction costs are too high – we can’t do everything in markets, but markets otherwise could do everything. And now you reduce transaction costs to zero, and behold, what do we get? The greatest control structures ever, right. People have pointed out to me that maybe it’s not fair, basically, to make that comparison. But I think it just tells you that the underlying argument about the relationship between markets and structures of power has always been a little bit misguided. I think the amplification of this additional power over entire groups of people, societies, populations, as Salome puts it – that is of particular concern. So, it goes well beyond the exploitation of individual labour.

One point I also try to make in the book: I think we have to liberate ourselves from the notion that exploitation happens in one particular social relation. It's much more complex. The financial system, you know, is not really exploiting labour directly, but it's being bailed out by socialising the cost of this particular financial system time and again. And here we're using basically the willingness of people, and their increasing dependence on the internet, to harvest their data and to exert control over them. With, of course, the big negative vision I think most of us have in our mind being the social credit system in China. And you can imagine something very similar in our capitalist market economies. So when Zuckerberg says, you know, let us do it because otherwise we have China, I don't really see the fundamental difference between the two. I think it's deeply troubling from an individual freedom perspective, but even more so from a societal one – the idea of self-governance in some meaningful way is being undercut.

Dr Jake Goldenfein:

Okay, Christine, did you have a question as well?

Prof Christine Parker:

Yes, I have a question and it follows on quite well from Seth’s question about conceptualising what is the harm that we’re talking about. I’m particularly interested in thinking about the environmental harm, and I noticed you used a few metaphors that related to other living beings and environment. And Salome, you mentioned the need to use data for climate change governance. And these are really genuine questions I guess.

I have three questions. So, one is: what if we expanded our conception of what the harm is from encoding social inequality to also include living beings and ecologies, or ecological resources, whose data is being harvested and used against them? So, you know, apparently we're going to be able to use data to pinpoint oil reserves much more quickly and exploit them, or to use genetic material about animals to re-engineer them in a way that's probably pretty cruel to them. That's the first question: does this other, more-than-human world have data, and some sort of rights or interest in its data?

Then there's the question of how the data governance that is currently encoded – we're talking about it putting the capitalist system on some sort of steroids – means increasing consumption, increasing production, increasing extraction. So, shouldn't that be one of our major concerns here? And of course that has social impacts as well as ecological impacts, because we live in this world, and we're extracting resources in ways that undermine indigenous people's rights, and so on.

And then the third, I'm going to sort of pick up on something you said, Salomé, because I'm sure you've thought about this more deeply than what came across: what assumptions are we making if we think that data governance could be useful for things like addressing climate change or addressing over-consumption? What assumptions are we making about how we're going to use data to govern whom, to solve these problems? Do we really think that we can use data to make sure that I buy more sustainable shoes or whatever? Because I think I'm probably just going to buy more shoes – maybe each individual pair will be more sustainable, but I'll probably just end up buying more of them, and that'll be a problem.

Yes, I wanted to raise those three questions about expanding our conception of the harm, potentially in an environmental direction.

Salomé Viljoen:

Sure. Maybe I'll start, since I think I kicked off a few of those. Yeah, so I'm actually a big fan of Martha Nussbaum's recent work on really expanding rights to non-human creatures, and sort of thinking of quasi-political rights for non-human interests. And you know, I think my project is really about expanding the set of interests that we think are relevant when we're thinking about how information is collected, processed, used, and shared, and that might very well involve thinking about the interests of animals, thinking about the interests of ecosystems. I also think we can extend this across time and think about the interests of future generations, and how we produce information, and for which purposes we produce information. So, yes, I'm generally in favour of expanding the set of legally relevant interests that go into thinking about how we produce information.

Yeah, I mean, I think a big problem right now is that a lot of our information production is done thinking about the interests of a handful of private companies and how to maximise profits for them, as opposed to actually taking into account the many interests that a lot of us have in the sort of industrial policy, if you want to call it that, of what information we collect and for which purposes. So, right now a lot of information production and data production is being put to extractive purposes. I think that if we did a fair accounting, politically, of all of the interests at stake in our information production, we wouldn't engage in that kind of information production. We'd probably be putting our informational resources to very different purposes if we actually took into account, in a more democratic way, all of the interests at stake in the way that we're channelling these informational resources.

In terms of the assumptions that are going into the idea that data governance can be useful: I think probably the most damning one, from a climate change perspective, gets back to this materiality of data – that computation is a resource-intensive practice. And it might very well be the case that a great deal of computation that might otherwise be socially beneficial and useful for us to undertake is just too resource-intensive, or, if we were to actually fairly account for it, would be a net loss for us in terms of our climate benefits. So, that's probably the most damning assumption.

I also think that, you know, we really are primed to think that data collection and data use is really bad because the way data is being collected, and the way it's being used right now, is really bad. But, you know, again, I am also committed to the idea that if we actually want to allocate CO2 resources fairly across all of the nations in the world, we want to think about allocating extremely scarce resources as we head into a climate-crisis sort of world – like water resources, food resources. We have to actually take great care to allocate our resources with efficiency. And I don't really know how we do that without relying on some sort of information infrastructure. And I actually think that information infrastructures that are managed more justly can probably do a better job of doing that than the highly concentrated private markets that we're currently relying on to do that kind of allocation.

Prof Katharina Pistor:

Just to add a couple of things. You know, I think we could actually have much more decentralised information collection, and we would need much lower levels of computational power to exploit these data, if we were very clear about for which purposes we need them and use them, and also had some kind of normative commitment to use them in particular ways rather than others, as I said before. I think the enormous amount of data collection that is happening right now is for potential future monetisation and, in part, posturing that collection on this scale really is needed. But even though we live in times of uncertainty, and we don't know when and where exactly what will happen – and even all the information collection will not tell us that – we could be much more sensitive to shortages in different parts of the world and respond to that if we made the right commitments in that sense.

I think we need information. We just need to know exactly for what purposes and how we govern that information, and who makes decisions about how to use that. That’s the critical aspect.

Dr Jake Goldenfein:

Okay, so we have a lot of really brilliant questions in the Q and A, and only a limited amount of time. So, we probably only have an opportunity to look at one or two of these questions, and I'm going to choose them based on keeping this particular thematic flow rather than any judgment as to quality or anything like that. And unfortunately time doesn't permit here. So, hopefully we can find a way to continue the conversation after this seminar – and indeed, we have more events coming up, so we'll have a think about that as well, and we'll get in touch with participants. So, one question here, which I think continues what we've just been talking about, is from Fleur Johns. It asks: is there not something problematic or politically misguided about strategic reliance on the humanist reduction of data relations to relations between people, given the extent to which digital proxies are, in Shane Denson's words, fundamentally decorrelated from human subjectivity, if we think about the latter in aggregated or individualised formats?

Salomé Viljoen:

So, I guess maybe I'll start, since I've been talking about data relations. Yeah, that's a really interesting question. So, I guess when I talk about data relations, I'm actually interested in us moving beyond the idea that what is of legal relevance is how I am reduced or rendered legible to a system, and how that reduction, or legible kind of facsimile of me, may or may not miss key elements of myself. I think there are absolutely interesting and normatively important questions to ask about what is missed in the little 'Salomé' as opposed to me as a full person. But I also think the reason I focus on relations is that, at a certain point, these simulacra, or these homunculi, or however you want to think of the legible reductions of us that are flat and mistaken and fundamentally inhuman – they get used in the world. They get used in the world to make decisions about people. They get used in the world to deny people things. They get used in the world in a way that socially constitutes the conditions that we find ourselves in, in ways that can reproduce legally relevant forms of inequality. And so by focusing on the condition of those relations, that's kind of what I'm interested in getting at. At a certain point those things are used for reasons, and those reasons may be flawed, they may be inaccurate, and they may be kind of violations of my inner person. But they're also reproducing forms of social inequality, and that's of legal relevance, almost separate from, and regardless of, the condition of that little homunculus, if you want to call it that.

Prof Katharina Pistor:

Why don’t we take another question and we don’t have to both answer, because we’re running out of time.

Dr Jake Goldenfein:

Sure. From Ellen Goodman: conceiving of data harms at the population level and in terms of externalities – which I support – is alien to most of the regulatory apparatus democracies are using to haltingly deal with these harms. What kind of regulatory form, other than trusts, might be useful? And what kinds of individual-level harms should we be willing to accept in order to benefit the population? See species or ecosystem versus individual animal preservation debates.

Prof Katharina Pistor:

You know, I think we can think about a range of other organisational forms, but the idea is basically that you have to have some kind of collective decision-making process that has some binding power over longer periods of time, with some adaptability. We need something of that sort to make sure that we have a compact we agree on: what data shall be harvested and how it shall be used. Now, animals and other non-human creatures don't have any decision-making power themselves, so it's up to us to make these types of decisions and make them sufficiently immutable. Many other humans don't have the time and resources to do the same thing, but I think, you know, that's sort of the direction I would think about this. Now, our current regulatory framework doesn't really address these issues, but I think what we're seeing here is that just treating it as a regulatory issue, or just as an antitrust issue, is insufficient, because the scale and scope of the transformation of social relations and power relations goes beyond that. And I think we really need something closer to a new kind of basic social compact – but not necessarily on a grand national or supranational scale. Also local compacts, and then think about how to scale this by having multiple communities engaging in similar conversations and similar commitment devices.

Salomé Viljoen:

So, I would echo all of that. I mean, Jake, do you want me to not answer? We can get more questions maybe, that way, or should I feel free to answer? Okay, yeah. I would echo all of that. I also think that once you understand the general instance of the problem to be developing better collective decision-making processes, you can also start to differentiate between different ways of achieving that for different kinds of data, or for different qualities of data relation. So, that might take the form of a municipal trust with respect to local location and movement data and local ecological data. It might take the form of granting labour authority over workplace information, so that, you know, unions can bargain over the conditions of surveillance that they're placed into. They can maybe also place conditions of surveillance back onto managers and bosses as a condition of labour negotiation. So, we can think of democratising data relations in the workplace. And, you know, I think you can also think about things like – a few European politicians have talked about this – just reverting all of the information that's been collected by private entities to the public interest, managed by some sort of public entity: something like a coalition of national statistical offices and national science foundations, cleaning that data and managing it for the public good. And then, you know, social scientists, or other researchers, could apply to use that data for social science research. And that would be something like a limited monopoly, right – the state reclaiming this public resource for public management back from private entities. That could be a trust, it could be some sort of independent agency. But that's kind of another model.

Dr Jake Goldenfein:

Alright, that's brilliant. I had thought that perhaps we could answer one more question about the applicability of these ideas across various contexts – there are a few questions about healthcare, and one about universities, in the Q and A. But we are out of time, and I recognise that it's late for our speakers, and people have other commitments. So, before handing over to finalise everything and say goodbye, I just want to thank our whole panel, and especially our speakers, so much, for what was such a brilliant and fascinating conversation – and, I hope, so enlightening for everyone. It was really marvellous and I'm so glad that we could have it together. So, thank you very much. And Christine, over to you for the final word.

Prof Katharina Pistor:

Thank you for great moderation.

Prof Christine Parker:

A huge thank you to Katharina and Salomé for giving up your Thursday evening, tearing yourself away from dinner or whatever, to participate in this invigorating discussion. And also to Jake, who obviously put a lot of work into preparing and facilitating this discussion, as well as organising this whole seminar series and getting it started. I also want to thank Loren Dela Cruz and Katherine Nichols from the centre of excellence, who've been doing all the hard work behind the scenes to make this seminar happen smoothly. Thank you to you. And a reminder to everybody who joined us: this is the first in a series of four seminars, which will be occurring on a fortnightly basis. The next seminar will be held in two weeks' time, at the same time, on Friday the 23rd of April. The topic is disciplining the market or being the market, and it considers the regulatory governance of platforms. We have another two intellectually stellar guests, Professor Julie Cohen of Georgetown Law Center and Associate Professor Kean Birch of York University. The conversation will be hosted by Professor Kimberlee Weatherall of the centre of excellence and the University of Sydney, and Professor Seth Lazar of HMI at the ANU. You can register again via the centre of excellence website, and we do expect to make a recording of today's seminar available via our website in due course. So, thank you very much to everybody for joining us, and thank you again to the speakers.
