EVENT DETAILS
The ADM+S Tech Talk Series brings together leading researchers and industry experts working in the ADM field to discuss the impacts and opportunities of technological advancements.
In 2021, the ADM+S Machines Research Program launched Tech Talks, an online seminar series that brings together leading researchers and industry experts for thought-provoking and insightful discussions on complex topics such as misinformation, algorithmic bias, and artificial intelligence. In the ever-evolving field of ADM, it is imperative that we stay informed about the ethical complications that arise from increased reliance on AI and automated technologies. A synopsis of each Tech Talk is published here after the event.
2022 TECH TALKS
RESPONSIBLE CONTENT RECOMMENDATIONS IN NEWS MEDIA
23 March 2022
Speaker: Cristina Kadar (NZZ, Switzerland)
In this Tech Talk, Cristina discussed NZZ’s five-year journey with content automation and personalisation. She highlighted the overall goals of content recommendation, algorithm design choices, and results from several product launches and large-scale A/B tests. Cristina is a Senior Data Scientist and Machine Learning Product Owner at NZZ, Switzerland’s German-language newspaper of record. She is also an industry expert at the Media Technology Center of ETH Zurich, where researchers and industry partners work together on projects that shape the future of media technology. Cristina completed her PhD in data science and information systems at ETH Zurich and has a background in computer science.
RADio – RANK-AWARE DIVERGENCE METRICS TO MEASURE NORMATIVE DIVERSITY IN NEWS RECOMMENDATIONS
11 November 2022
Speakers: Sanne Vrijenhoek (University of Amsterdam’s Institute of Information Law) and Gabriel Bénédict (University of Amsterdam and RTL Netherlands)
Watch the recording
View transcript
In traditional recommender system literature, diversity is often seen as the opposite of similarity, and typically defined as the distance between identified topics, categories or word models. However, this is not expressive of the social sciences’ interpretation of diversity, which accounts for a news organization’s norms and values and which we here refer to as normative diversity. We introduce RADio, a versatile metrics framework to evaluate recommendations according to these normative goals. RADio introduces a rank-aware Jensen-Shannon (JS) divergence. This combination accounts for (i) a user’s decreasing propensity to observe items further down a list and (ii) full distributional shifts as opposed to point estimates. We evaluate RADio’s ability to reflect five normative concepts in news recommendations on the Microsoft News Dataset and six (neural) recommendation algorithms, with the help of our metadata enrichment pipeline. We find that RADio provides insightful estimates that can potentially be used to inform news recommender system design.
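The core idea of a rank-aware divergence can be sketched in a few lines: discount each recommended item by its rank before forming a categorical distribution, then compare that distribution to a reference (e.g. the full candidate pool) with Jensen-Shannon divergence. The sketch below is illustrative only, not the authors’ released RADio implementation; the logarithmic rank discount and the category names are assumptions for the example.

```python
import math
from collections import Counter

def rank_weighted_distribution(categories):
    """Discount each item by its rank (1 / log2(rank + 1)), reflecting a
    user's decreasing propensity to observe items further down the list,
    then normalise into a categorical distribution."""
    weights = Counter()
    for rank, cat in enumerate(categories, start=1):
        weights[cat] += 1.0 / math.log2(rank + 1)
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (base-2, so bounded in [0, 1]) between
    two categorical distributions given as dicts; 0 means identical."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}
    def kl(a):
        return sum(a.get(k, 0.0) * math.log2(a.get(k, 0.0) / m[k])
                   for k in keys if a.get(k, 0.0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Compare a short recommendation list against the candidate pool
pool = ["politics", "sports", "culture", "politics", "economy"]
recs = ["politics", "politics", "sports"]
p = rank_weighted_distribution(recs)
q = {c: n / len(pool) for c, n in Counter(pool).items()}
divergence = js_divergence(p, q)
```

A low divergence indicates the rank-weighted recommendations mirror the reference distribution; a high value flags a distributional shift, which is what makes the metric usable as a normative diversity signal.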
2021 TECH TALKS
AI FOR SOCIAL GOOD
12 May 2021
Speaker: Milind Tambe (Google)
Watch the recording
View transcript
In this Tech Talk, Google researcher Milind Tambe discusses multiagent reasoning for social impact, examining results from deployments for public health and conservation. Milind Tambe is the Gordon McKay Professor of Computer Science and Director of the Center for Research in Computation and Society at Harvard University, as well as Director of ‘AI for Social Good’ at Google Research India. Prof Tambe examines the ways in which AI can improve the social service sector, while touching on the risks associated with its intervention. Drawing on years of experience as a multiagent systems researcher, Prof Tambe offers insights and expertise on the evolution of AI systems, their improvements and challenges, and case-study examples of the benefits AI can provide.
PRESERVING INTEGRITY IN ONLINE SOCIAL MEDIA
27 May 2021
Speaker: Alon Halevy (Facebook)
Watch the recording
View transcript
Alon Halevy, Director at Facebook AI, outlined the challenge of maintaining integrity for social media companies, while highlighting some of the recent progress made in this area at Facebook. Mr Halevy analysed the obstacles that complicate this work, leading to a discussion of affective computing and of combining neural and symbolic techniques for safe data management.
CHALLENGES TO DISCOVERING AND MEASURING COMPUTATIONAL HARMS
18 August 2021
Speaker: Alexandra Olteanu (Microsoft)
In this session, Microsoft researcher Alexandra Olteanu discussed the many challenges that persist in identifying and addressing objectionable content and behaviours online. To make a platform safe for users, computational systems are used to identify and mitigate hate speech, misinformation, and discrimination, among other harms. However, these same systems can inadvertently engender, reinforce, and amplify such behaviours. Drawing on recent research conducted at Microsoft, Ms Olteanu analysed why techniques for pre-empting future issues are not nearly as well developed as those used to correct already-existing issues, and how these methods could be refined to remove assumptions affecting the fairness and inclusivity of system outputs. Ms Olteanu is a researcher in Microsoft’s Fairness, Accountability, Transparency, and Ethics (FATE) group, specialising in evaluating the fairness of computational systems, particularly measurements aimed at quantifying possible computational harms.
STRATEGY, RESPONSIBLE AI & INFORMATION SECURITY, AND PRIVACY AT DATA61
27 September 2021
Speakers: Dr Liming Zhu and Dr Thierry Rakotoarivelo (CSIRO)
Dr Liming Zhu and Dr Thierry Rakotoarivelo joined ADM+S members for this Tech Talk on strategy, responsible AI and information security, and privacy. Drawing on their experience as researchers at CSIRO’s Data61, they discussed how, in recent years, governments, research organisations, and enterprises have issued many ethical regulations, principles, and guidelines for responsible AI, often without clear guidance on how to implement them. Through an ethical lens, our guest researchers explained the necessity of theoretical guarantees for new algorithms, and how risks to data privacy could be mitigated by developing and deploying provable and fair mechanisms. Dr Liming Zhu has a background in ethical AI, software engineering, blockchain, and cybersecurity, providing an extensive foundation of knowledge in this area of growing importance. Dr Thierry Rakotoarivelo’s research focuses on data privacy and information security, with particular interests in the design and use of frameworks for privacy risk assessment, the development of provable privacy mechanisms, and the study of utility/privacy trade-offs in specific application domains.
MIXED METHODS EVALUATION OF SEARCH AND RECOMMENDATION
26 October 2021
Speakers: Praveen Chandar, Christine Hosey, and Brian St. Thomas (Spotify)
Advanced user-focused metrics are used to assess and improve system performance for platforms such as Spotify, and developing them requires a mixed-methods approach. In this Tech Talk, we discussed how qualitative insights and design decisions can both restrict and enable the data collection process, why recommender systems built on logged data embed inaccurate assumptions, and how qualitative analysis methods can make those assumptions more explicit and more expressive of genuine user behaviour. Praveen Chandar, Christine Hosey, and Brian St. Thomas are researchers working in data and evaluation at Spotify. Mr Chandar’s expertise includes machine learning, information retrieval, and recommender systems. Ms Hosey is a behavioural science researcher with a primary focus on the development of fair and ethical recommendation systems, and Mr St. Thomas is a data scientist specialising in online experimentation methods and metric development.
CONTACT
Tech Talk enquiries
Dr Danula Hettiachchi
Research Fellow, RMIT University
danula.hettiachchi@rmit.edu.au