Fairness, explainability, and transparency in automated decision making
As more aspects of our lives become automated by computers, it is increasingly important that these decisions (their inputs, processing, and outputs) are human-interpretable. This is especially important when examining whether an automated decision is biased or unfair towards groups defined by protected attributes (e.g. gender, race), as such bias can have broad, long-term, hidden impacts on society. Furthermore, it should be clear why a certain decision was reached, and which inputs affected it. This thesis examines the robustness and stability of current fairness strategies, and looks to resolve the mathematical conflict between group fairness and individual fairness. The work also examines the scalability of automated explanations for machine learning models, and questions whether explainable artificial intelligence promotes fairness and utility or, ultimately, reduces them.
Prof Flora Salim, University of New South Wales
Dr Jeffrey Chan, RMIT University
Dr Kacper Sokol, ETH Zurich