AI systems that dynamically navigate the human world will sometimes need to predict and produce human-like moral judgments. This task requires integrating insights from human moral cognition (what decision would humans make in this situation?), normative ethics (what is the right decision for an AI to make?), and artificial intelligence engineering (how can we implement this functionality in AI systems?). A range of solutions has begun to emerge within the cognitive science community to meet these three demands. However, most solutions satisfy some demands while falling short on others. This symposium presents four competing solutions for building AI with a human-like moral sense, with the goal of highlighting the strengths and weaknesses of each approach and how each might complement the others in development and deployment going forward.