Recent years have seen remarkable advances in the development and use of Artificial Intelligence (AI) in tasks such as image classification, driving cars, and writing scientific articles. Although AI can outperform humans on many tasks, there remain domains where humans and AI working together can outperform either working alone. For humans and AI to collaborate effectively, the human must trust the AI bot to an appropriate, calibrated degree. If the human trusts the bot too little, or conversely trusts it more than is warranted, the human-bot team will not perform as well as it could. We report three experiments examining trust in human-AI teaming. Whereas existing studies typically collect binary responses (to trust, or not to trust), we present a novel paradigm that quantifies trust in a bot's recommendation on a continuous scale. These data afford greater precision and, in future work, the development of more refined models of human-bot trust.