Behind every supervised learning problem lies an optimization problem, and solving these problems reliably and efficiently is a key step in any machine learning pipeline. This thesis studies efficient optimization algorithms for a variety of machine learning problems, with a particular focus on sparse learning. We begin with a new class of algorithms for training feedforward neural networks. We then present an efficient algorithm for constructing knockoff features for statistical inference. Finally, we study $\ell_0$-penalized and $\ell_0$-constrained optimization problems and a class of efficient algorithms for solving these non-convex problems while providing guarantees on the quality of the solution.
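For illustration, the $\ell_0$ problems referenced above are commonly written in the following canonical forms (a sketch of the standard formulation; the exact losses and constraints studied in the thesis may be more general):
$$\min_{\beta \in \mathbb{R}^p} \; f(\beta) + \lambda \|\beta\|_0 \qquad \text{and} \qquad \min_{\beta \in \mathbb{R}^p} \; f(\beta) \;\; \text{s.t.} \;\; \|\beta\|_0 \le k,$$
where $f$ is a loss function (e.g., least squares), $\|\beta\|_0$ counts the nonzero entries of $\beta$, $\lambda > 0$ trades off fit against sparsity, and $k$ bounds the number of selected features. The $\ell_0$ term makes both problems non-convex, which is what motivates algorithms with solution-quality guarantees.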