Building Accountable Natural Language Processing Models: on Social Bias Detection and Mitigation
- Zhao, Jieyu
- Advisor(s): Chang, Kai-Wei
Abstract
Natural Language Processing (NLP) plays an important role in many applications, including resume filtering, text analysis, and information retrieval. Despite the remarkable accuracy enabled by advances in machine learning methods, recent studies show that these techniques also capture and generalize the societal biases present in their training data. For example, an automatic resume filtering system may unconsciously select candidates based on their gender or race due to implicit associations between applicant names and job titles, reinforcing the societal disparities documented in [BCZ16]. Various laws and policies have been designed to ensure societal equality and diversity. However, no comparable mechanism exists to restrict machine learning models from making biased predictions in sensitive applications. My research goal is to analyze potential stereotypes exhibited in various machine learning models and to develop computational approaches that enhance fairness in a wide range of NLP applications. The broader impact of my research aligns with the goal of fairness in machine learning: recognizing the value of diversity and of underrepresented groups.