

Practical Machine Learning Explainability: Surrogate Explainers and Fairwashing

5 November @ 11:00 am - 3:00 pm AEDT
[Event banner] Text: ADM+S Members only. Practical Machine Learning Explainability. Image: a traditional Tibetan weaver in Dharamsala, India, working at a loom, framed by yellow bounding boxes that classify different elements of textile craft, with digital distortion effects and fragmented views of weaving tools and finished textiles.

Join this session delivered by ADM+S Affiliate Kacper Sokol and Associate Investigator Danula Hettiachchi as they introduce the three core components of surrogate explainers: data sampling, interpretable representation and explanation generation, for text, image and tabular data.

Surrogate explainability is a popular transparency technique for assessing the trustworthiness of predictions output by black-box machine learning models. While such explainers are often presented as monolithic, end-to-end tools, they are in fact highly modular and offer broad scope for parameterisation. This observation suggests that each use case may require a bespoke surrogate built and tuned for the problem at hand.

This session introduces the three core components of surrogate explainers: data sampling, interpretable representation and explanation generation, for text, image and tabular data. By understanding these building blocks individually, as well as their interplay, we can build robust and trustworthy explainers. However, the same insights can be misused to create technically valid explainers that are designed to produce misleading justifications of individual predictions. For example, by manipulating the size and distribution of the data sample (or the grouping criteria of the interpretable representation), an automated decision can be made to appear fair even though the underlying model is inherently biased. This overview of the theory is complemented by a low-code hands-on exercise facilitated through an IPython widget delivered via a Jupyter Notebook.
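The three components described above (sampling around the explained instance, mapping samples into an interpretable binary representation, and fitting a locally weighted linear surrogate) can be sketched in a few lines of NumPy. This is a minimal illustrative stand-in, not the session's actual materials: the toy black-box model, noise scale and kernel width are all hypothetical assumptions.

```python
import numpy as np

def black_box(X):
    """Hypothetical opaque model: class 1 when a weighted feature sum is positive."""
    return (X[:, 0] + 0.1 * X[:, 1] > 0).astype(float)

rng = np.random.default_rng(0)
instance = np.array([0.2, -0.2])  # the prediction being explained

# 1. Data sampling: perturb the explained instance with Gaussian noise.
samples = instance + rng.normal(scale=0.5, size=(1000, 2))

# 2. Interpretable representation: binarise each feature as
#    "above (1) / not above (0) its value in the explained instance".
z = (samples > instance).astype(float)

# Locality: weight samples by an exponential kernel over their distance
# from the explained instance, so nearby samples dominate the fit.
dist = np.linalg.norm(samples - instance, axis=1)
weights = np.exp(-dist ** 2 / 0.25)

# 3. Explanation generation: fit a weighted linear surrogate to the
#    black-box predictions; its coefficients serve as feature importances.
y = black_box(samples)
design = np.hstack([np.ones((len(z), 1)), z])
sw = np.sqrt(weights)[:, None]
coef, *_ = np.linalg.lstsq(design * sw, y * sw.ravel(), rcond=None)
importance = coef[1:]
print("feature importances:", importance)
```

Note that each stage exposes the manipulation lever the session warns about: changing the sampling `scale`, the binarisation criterion, or the kernel width alters the resulting coefficients, which is precisely how a technically valid explainer can be tuned to fairwash a biased model.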

Delivered by ADM+S Affiliate Kacper Sokol and ADM+S Associate Investigator Danula Hettiachchi

ADM+S Members Only – Registration and Zoom link via ADM+S Calendar invite

Details

Venue

  • ADM+S Centre, RMIT University
  • 106-108 Victoria Street
    Carlton, VIC 3053 Australia
  • Phone: 03 9925 0226

Organiser

  • ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S)
  • Email admsevents@rmit.edu.au