Open Access Publications from the University of California

Understanding Unconscious Bias by Large-scale Data Analysis

  • Author(s): Tang, Shiliang
  • Advisor(s): Zhao, Ben Y; Zheng, Haitao; et al.

Biases refer to disproportionate weight in favor of or against one thing, person, or group compared with another. The study of bias has long been an important topic in psychology, sociology, and behavioral economics. Over the years, researchers have observed a large variety of biases, proposed explanations for how they arise, and identified how they impact our society.

Many biases operate unconsciously: they are built-in shortcuts in our brain that process information automatically. Because of this unconscious nature, the study of biases has usually relied on carefully designed controlled experiments to discover or explain them. Over the past century, these experiments have identified over 200 different kinds of human biases and developed rich theories to explain their underlying mechanisms.

However, because these experiments happen in isolated environments with a limited number of participants, they do not necessarily reflect how human biases play out in the wild. Observing biases at scale was difficult when they were first discovered, because large-scale user behavior data were not available. As more and more user activity moves online, gathering user behavior data has become much easier. These data provide valuable opportunities to examine how human biases affect our society in the wild, and what happens when a large group of biased people freely interact with each other.

In this dissertation, we use empirical approaches to understand and measure human bias using large-scale datasets. We do not aim to identify new types of biases or to measure the biases of individuals; instead, we focus on observing the aggregated outcomes of groups of biased people interacting with each other. This dissertation contains four studies on this topic.

The first two studies measure irrational behavior in the financial domain. We start by examining the quality of information and communication in online investment discussion boards. We show that positivity bias and skewed risk/reward assessments, exacerbated by the insular nature of the community and its social structure, contribute to underperforming investment advice and unnecessary trading. Discussion post sentiment has a negligible correlation with future stock market returns, but does have a positive correlation with trading volumes and volatility. Our trading simulations show that across different timeframes, this misinformation leads 50-70% of users to underperform the market average. We then examine the social structure in these communities, and show that the majority of market sentiment is produced by a small number of community leaders, and that many members actively resist negative sentiment, thus minimizing viewpoint diversity.
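The kind of correlation test described above can be sketched in a few lines. The sentiment, return, and volume series below are made-up illustrations, not data from the dissertation; only the shape of the analysis is shown:

```python
# Sketch: correlate a daily discussion-board sentiment series with same-period
# market returns and trading volume. All numbers here are invented examples.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sentiment = [0.2, 0.5, -0.1, 0.4, 0.0, 0.3]         # daily net post sentiment
returns   = [0.01, -0.02, 0.00, 0.01, -0.01, 0.02]  # daily stock returns
volume    = [1.1, 1.8, 0.9, 1.6, 1.0, 1.4]          # normalized trading volume

print(f"sentiment vs returns: {pearson(sentiment, returns):+.2f}")
print(f"sentiment vs volume:  {pearson(sentiment, volume):+.2f}")
```

On these toy series the sentiment/returns correlation is near zero while the sentiment/volume correlation is strongly positive, mirroring the pattern the study reports.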

We then study the phenomenon of herd behavior in individual stocks. We hypothesize that in less mature markets, investors rely on common external inputs (e.g., technical analysis, or the generation of buy/sell “signals” using popular algorithms or software), resulting in herd behavior that moves the prices of individual stocks. Our survey finds that a majority of Chinese investors rely on technical analysis for investment decisions, compared to a minority of US investors. Next, using US markets as a baseline, we analyze two decades of historical price data on US and Chinese markets, and find significant support for the hypothesis that over-reliance on technical analysis has led to a “self-fulfilling prophecy” effect that makes the prices of Chinese stocks much more predictable. Our trading simulation shows that by identifying and exploiting herd behavior, trading strategies based solely on technical analysis can dramatically outperform markets in China, while severely underperforming in US markets.
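A minimal sketch of the kind of “signal” generation this hypothesis refers to is a simple moving-average crossover, one of the most popular technical indicators. The window sizes and price series below are illustrative assumptions, not values from the dissertation:

```python
# Sketch: moving-average crossover signals, a common technical-analysis rule.
# "buy" when the short-window average crosses above the long-window average;
# "sell" on the opposite cross; "hold" otherwise.

def sma(prices, window):
    """Simple moving average; None until enough history exists."""
    return [None if i + 1 < window
            else sum(prices[i + 1 - window:i + 1]) / window
            for i in range(len(prices))]

def crossover_signals(prices, short=3, long=5):
    """Emit a buy/sell/hold signal for each day of the price series."""
    s, l = sma(prices, short), sma(prices, long)
    signals = []
    for i in range(len(prices)):
        if i == 0 or None in (s[i], l[i], s[i - 1], l[i - 1]):
            signals.append("hold")          # not enough history yet
        elif s[i - 1] <= l[i - 1] and s[i] > l[i]:
            signals.append("buy")           # short SMA crossed above long SMA
        elif s[i - 1] >= l[i - 1] and s[i] < l[i]:
            signals.append("sell")          # short SMA crossed below long SMA
        else:
            signals.append("hold")
    return signals

prices = [10, 10, 10, 10, 10, 11, 12, 13, 12, 11, 10, 9]
print(crossover_signals(prices))
```

When many investors act on the same rule simultaneously, the resulting trades can move the price in the direction the rule predicted, which is the self-fulfilling dynamic the study tests for.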

The last two studies examine gender bias from different angles. We start by studying the effects of potentially gender-biased terminology in job listings, and their impact on job applicants, using a large historical corpus of 17 million listings on LinkedIn spanning 10 years. We develop algorithms to detect and quantify gender bias, validate them using external tools, and use them to quantify job listing bias over time. We then perform a user survey over two user populations to validate our findings and to quantify the end-to-end impact of such bias on applicant decisions. We show that gender bias in listings has decreased significantly over the last 10 years. However, we find that the impact of gender bias in listings is dwarfed by our respondents’ inherent bias towards specific job types.

Following this study, we seek to systematically examine the problem of detecting gender stereotypes in natural language. We develop a gender stereotype lexicon that reflects the concept of gender stereotypes in modern society, and compare the performance of the traditional lexicon-based approach with an end-to-end deep learning approach on a large human-labeled text corpus that we collected. We show that the end-to-end approach significantly outperforms the lexicon-based approach, suggesting that the widely used lexicon approach may eventually be replaced.
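The lexicon-based baseline can be sketched as a word-count scorer over gender-coded word lists. The tiny lexicon below is a hypothetical stand-in for the dissertation's full gender stereotype lexicon:

```python
# Sketch: lexicon-based gender-stereotype scoring. The word lists here are
# small invented examples, not the lexicon developed in the dissertation.
import re

MASCULINE_CODED = {"competitive", "dominant", "assertive", "ninja"}
FEMININE_CODED = {"supportive", "nurturing", "collaborative", "empathetic"}

def lexicon_score(text):
    """Return (masculine_count, feminine_count) over word tokens in text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    masc = sum(t in MASCULINE_CODED for t in tokens)
    fem = sum(t in FEMININE_CODED for t in tokens)
    return masc, fem

ad = "We want a competitive, assertive coding ninja for our dominant team."
print(lexicon_score(ad))  # → (4, 0)
```

A limitation visible even in this sketch is that the scorer only fires on exact lexicon matches; a sentence can convey a stereotype without containing any listed word, which is one reason an end-to-end model trained on labeled text can outperform the lexicon approach.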

In summary, we develop tools that measure human biases at large scale, and perform large-scale measurements of the aggregated behavior of biased populations. We hope our work sheds light on a deeper understanding of human biases in the wild.
