Reset control is a technique that augments traditional dynamic feedback controllers with a mechanism to adaptively or periodically reset a memory state so as to improve transient closed-loop behaviors such as overshoot and settling time. It has been shown to overcome inherent limitations of linear time-invariant controllers, enabling performance improvements in a wide variety of applications, including industrial high-precision motion systems and electromechanical automotive systems. As reset control finds broader applications, it faces challenges in implementation and analysis, especially due to the prevalence of nonlinearities, such as those arising in the dynamics of robotic and vehicular systems and in the cost functions to be optimized in such systems. One challenge lies in the need for a feature known as temporal regularization, which enforces a minimum dwell time between consecutive resets; it is generally necessary to guarantee robust stability properties of reset control systems, yet it can be difficult to implement effectively while preserving the benefits of resets. Another challenge lies in the inherent discontinuity of control signals produced by reset controllers, which can be detrimental to hardware in physical systems.
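For illustration, the following minimal sketch simulates a Clegg integrator, the prototypical reset element: the memory state integrates the input but jumps to zero whenever input and output have opposite signs. The dwell-time parameter `rho` is an assumed, simplified form of temporal regularization, and the jump itself produces the discontinuous control signals noted above; the input signal and numerical values are purely illustrative.

```python
import numpy as np

def clegg_integrator(e, dt=1e-3, rho=0.01):
    """Simulate a Clegg integrator (first-order reset element) on an input
    sequence e: integrate e, but reset the state x to zero whenever e and x
    have opposite signs. Temporal regularization is modeled (simplistically)
    by requiring a minimum dwell time `rho` between consecutive resets."""
    x = 0.0              # integrator (memory) state
    since_reset = rho    # time elapsed since the last reset
    out = np.empty(len(e))
    for k, ek in enumerate(e):
        if ek * x < 0 and since_reset >= rho:
            x = 0.0              # hard reset: the state jumps discontinuously
            since_reset = 0.0
        else:
            x += dt * ek         # ordinary integration (flow)
            since_reset += dt
        out[k] = x
    return out

# Example: a sinusoidal input triggers resets each time the input changes
# sign while the state does not, producing jumps in the output.
t = np.arange(0.0, 2.0, 1e-3)
x = clegg_integrator(np.sin(2 * np.pi * t))
```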
This dissertation studies the recently introduced notion of soft resetting, which addresses the above limitations by implementing reset behaviors in an approximate sense, allowing resets to occur gradually rather than instantaneously and doing so with tunable fidelity of approximation. It is shown that, if a traditional reset controller admits a strongly convex energy function certifying passivity, then there exists a soft-reset controller that approximates the behavior of the traditional controller while inheriting its passivity properties. The implications of this result are discussed for nonlinear and multi-agent control problems in which nonlinear cost functions are to be optimized in steady state. Connections are then drawn between discrete-time analogues of soft-reset systems and accelerated gradient methods for numerical optimization, where resetting has historically been referred to as restarting and has been shown to improve convergence behavior in applications such as machine learning. Specifically, for convex problems, linear matrix inequalities are constructed to numerically certify exponential convergence, while for nonconvex problems, asymptotic stability in probability of global minima is studied for a class of stochastically perturbed accelerated gradient methods with resets. Soft resetting is numerically demonstrated on various problems, including vehicular formation control and online parameter identification.
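As a concrete sketch of the soft-reset idea, consider the same Clegg-type element as above: the instantaneous jump can be replaced by a fast but continuous flow toward the reset value, with a parameter `eps` tuning the fidelity of approximation. The first-order flow term used here is an illustrative assumption; the soft-reset construction analyzed in the dissertation is more general.

```python
import numpy as np

def soft_reset_integrator(e, dt=1e-3, eps=0.01):
    """Soft-reset analogue of the Clegg integrator: instead of jumping x to
    zero in the reset region {e*x < 0}, flow x rapidly toward zero at rate
    1/eps. Smaller eps yields higher fidelity to the hard-reset behavior,
    while the state trajectory (and hence the control signal) stays
    continuous."""
    x = 0.0
    out = np.empty(len(e))
    for k, ek in enumerate(e):
        if ek * x < 0:
            x += dt * (-x / eps)   # fast continuous flow toward the reset value
        else:
            x += dt * ek           # ordinary integration (flow)
        out[k] = x
    return out
```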
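Similarly, the connection to restarting can be illustrated with Nesterov's accelerated gradient method equipped with a well-known adaptive, function-value restart rule: whenever the objective increases, the momentum state is reset, which empirically restores fast convergence. The objective `f`, gradient `grad`, and step size `step` below are assumed problem data, and the quadratic example is purely illustrative.

```python
import numpy as np

def agm_with_restart(f, grad, x0, step, iters=500):
    """Nesterov's accelerated gradient method with adaptive restart: when the
    objective increases, reset the momentum parameter and take a plain
    gradient step. For an L-smooth objective, step ~ 1/L is a standard
    choice."""
    x = x_prev = np.asarray(x0, dtype=float)
    theta = 1.0
    for _ in range(iters):
        theta_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta**2))
        beta = (theta - 1.0) / theta_next   # momentum coefficient
        y = x + beta * (x - x_prev)         # extrapolation (momentum) step
        x_next = y - step * grad(y)         # gradient step
        if f(x_next) > f(x):                # function-value restart test
            theta_next = 1.0                # reset the momentum state...
            x_next = x - step * grad(x)     # ...and take a plain gradient step
        x_prev, x, theta = x, x_next, theta_next
    return x

# Example: an ill-conditioned quadratic, where restarts curb the
# oscillations that unchecked momentum would otherwise induce.
A = np.diag([1.0, 100.0])
f = lambda z: 0.5 * z @ A @ z
grad = lambda z: A @ z
x_star = agm_with_restart(f, grad, x0=[1.0, 1.0], step=1.0 / 100.0)
```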