Fairness-Preserving Empirical Risk Minimization
- Yang, Guanqun
- Advisor(s): Roychowdhury, Vwani P
Abstract
Concerns about the ramifications of societal bias against particular identity groups (for example, gender or race) embedded in algorithmic decision-making systems have grown steadily over the past decade. Machine learning models commonly enter these systems through the empirical risk minimization (ERM) principle, which often causes unfairness by trading off underrepresented groups for overall performance. Despite the importance of preserving fairness in such systems, there is little consensus on defining unified fairness metrics, designing widely applicable bias-mitigation algorithms, or delivering interpretable models that abide by the ERM principle. The situation is further aggravated when non-structural data, including text, images, and audio, is involved, because a well-defined identity attribute is unavailable. Current approaches attempt to tackle algorithmic bias in non-structural settings through the data itself, the intermediate representations, and the inference component within models. In this thesis, we propose to unify all three bias-mitigation operations into one streamlined machine learning pipeline. To provide interpretable results, these explorations are carried out alongside the debiasing procedures, with theoretical justifications provided accordingly. By combining different bias-mitigation strategies through synergistic effects and addressing model transparency by investigating internal representations, we show that the proposed pipeline can deliver interpretable machine learning models that embody fairness across identity groups in a variety of non-structural data settings.
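The tension between ERM and group fairness noted above can be made concrete. As an illustrative sketch (the notation below is ours and is not taken from the thesis), standard ERM minimizes the average loss over all samples, so a group that contributes few samples carries little weight in the objective; a group-aware alternative, such as minimizing the worst group risk, removes that imbalance.

% Standard ERM: each sample contributes equally to the average,
% so the objective is dominated by well-represented groups.
\hat{\theta}_{\mathrm{ERM}} = \arg\min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \ell\big(f_\theta(x_i), y_i\big)

% One fairness-preserving alternative (illustrative only): minimize the
% worst-case risk over identity groups g \in \mathcal{G}, with n_g samples
% in group g, so no group is traded off for overall performance.
\hat{\theta}_{\mathrm{fair}} = \arg\min_{\theta} \; \max_{g \in \mathcal{G}} \; \frac{1}{n_g} \sum_{i : g_i = g} \ell\big(f_\theta(x_i), y_i\big)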