One of the outstanding challenges in the field of human-computer interaction is building assistive interfaces that help users with the perception and control of complex systems, such as cars, quadcopters, and prosthetic limbs. In this thesis, we propose machine learning algorithms for automatically designing personalized, adaptive interfaces that improve users' performance on sequential decision-making tasks. First, we present work that uses theory of mind to model seemingly irrational user behavior as rational with respect to incorrect internal beliefs about how the world works, and leverages this model to assist users by modifying their observations and actions. Second, we present work that uses model-free reinforcement learning from human feedback to fine-tune user actions, making minimal assumptions about user behavior. We demonstrate the effectiveness of our methods through experiments with human participants, in which users play the Lunar Lander video game, perform simulated navigation tasks, and land a quadcopter.