The methods by which people harm others evolve with changes in technology and in access to it. Several cognitive, linguistic, and behavioral theories suggest that biased language use correlates with dominance and can reduce the diversity and inclusivity of a community (e.g., Poteat et al., 2010). We present a cross-cultural, cross-linguistic study of moderators on Reddit in English, Arabic, and French. We collect and analyze a large Reddit moderation dataset and use machine learning models to study cognitive and behavioral differences in moderation across cultures. We then work with expert linguists who analyze and evaluate our results. Finally, we explore the implications of our models for studying how improper moderation of online content can silence voices from different communities. Our preliminary results reveal biases against women and minority groups and, more broadly, support our hypothesis that culture and discussion topic bias moderation decisions.