Breathing, laughing, sneezing, and coughing are all important human behaviors that are generated in the torso. Yet when these behaviors are animated, the movement of the human torso is often simplified and stylized. Recent work depicting torso movement has focused on purely data-driven approaches, such as capturing the skin deformation of an actor with a motion capture system. Although this approach produces impressive recreations of the captured motion, it offers little control to an animator. Procedural methods have also been used to create torso motion and would provide animator control, but there is a large amount of interplay among the different parts of the human torso that is difficult to capture with such methods.

We present a novel technique that uses an anatomically inspired, physics-based torso simulation composed of a mix of rigid and deformable parts. The simulation uses muscle elements and proportional-derivative (PD) controllers to generate the forces and torques that drive the motion. In addition, we develop multiple ways to control the simulation. We present high-level controls that modify the strength and frequency of the motion, as well as two automatic ways to drive the torso simulation using optimization techniques: the first uses motion capture equipment to define breathing input signals for a specific subject, whereas the second uses only an audio track to generate a laughter animation. The techniques presented in this dissertation come from two main publications: Breathe Easy: Model and Control of Simulated Respiration for Animation and Laughing Out Loud: Control for Modeling Anatomically Inspired Laughter using Audio.
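To make the PD-control idea concrete, the following is a minimal sketch of how a proportional-derivative controller can compute a joint torque from a target angle. It is not the dissertation's implementation; the gains kp and kd, the unit inertia, the time step, and the function name pd_torque are all illustrative assumptions.

# Minimal sketch of a proportional-derivative (PD) controller of the kind
# described above; gains, names, and the update loop are illustrative
# assumptions, not the dissertation's actual implementation.

def pd_torque(theta, theta_dot, theta_target, kp=200.0, kd=20.0):
    """Return a joint torque that drives theta toward theta_target.

    theta        -- current joint angle (radians)
    theta_dot    -- current joint angular velocity (radians/second)
    theta_target -- desired joint angle (radians)
    kp, kd       -- proportional and derivative gains (assumed values)
    """
    # The proportional term pulls the joint toward the target; the
    # derivative term damps the motion so the joint does not oscillate.
    return kp * (theta_target - theta) - kd * theta_dot


if __name__ == "__main__":
    # Example: drive a single unit-inertia joint toward 0.5 rad.
    theta, theta_dot = 0.0, 0.0   # start at rest
    inertia, dt = 1.0, 0.01       # assumed inertia and 10 ms time step
    for step in range(300):
        tau = pd_torque(theta, theta_dot, theta_target=0.5)
        theta_dot += (tau / inertia) * dt   # semi-implicit Euler integration
        theta += theta_dot * dt
    print(f"final angle: {theta:.3f} rad")  # ~0.5 once the joint settles

In a torso simulation, a controller of this form would be evaluated per joint (or per muscle element) at each simulation step, with the target trajectories supplied by the high-level controls or optimization procedures described above.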