Optimization on Riemannian manifolds has drawn wide attention in the optimization community due to its applications in various fields. The problem differs from classical nonconvex optimization because the feasible set lacks a linear (vector-space) structure. In this dissertation, we study optimization problems on Riemannian manifolds from several aspects.
In the second chapter, we consider stochastic zeroth-order optimization over Riemannian submanifolds. We propose estimators of the Riemannian gradient and Hessian, use them to solve Riemannian optimization problems in multiple settings, and analyze their convergence theoretically. We also provide numerical examples to verify the efficacy of the proposed methods.
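As an illustration (not necessarily the exact construction analyzed in the chapter), a common zeroth-order Riemannian gradient estimator is built from function values along random tangent directions,
\[
\widehat{\mathrm{grad}}\, f(x) \;=\; \frac{f\big(\mathrm{Exp}_x(\mu u)\big) - f(x)}{\mu}\, u,
\qquad u \sim \mathcal{N}(0, I) \ \text{in} \ T_x\mathcal{M},
\]
where $\mu > 0$ is a smoothing parameter and $\mathrm{Exp}_x$ denotes the exponential map at $x$; averaging over several samples of $u$ reduces the variance of the estimate.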
In the third chapter, we continue the topic of the second chapter by incorporating Riemannian moving-average stochastic gradient estimators. This improves the analysis of the previous chapter by achieving the optimal sample complexity for obtaining $\epsilon$-approximate first-order stationary points with batch-free iterations. We also improve the algorithm's practicality by using retractions and vector transports, which reduces the per-iteration complexity.
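As a hedged sketch of the idea (the chapter's estimator may differ in its exact weights), a moving-average Riemannian gradient estimator recursively blends the new stochastic gradient with the transported previous direction,
\[
d_k \;=\; \beta\, \mathrm{grad}\, f(x_k;\xi_k) \;+\; (1-\beta)\, \mathcal{T}_{x_{k-1}\to x_k}\, d_{k-1},
\qquad \beta \in (0,1],
\]
where $\mathcal{T}_{x_{k-1}\to x_k}$ is a vector transport; the iterate is then updated by a retraction, $x_{k+1} = \mathrm{Retr}_{x_k}(-\eta_k d_k)$, which avoids large batches at each iteration.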
In the fourth chapter, we consider the federated learning problem on Riemannian manifolds, with applications such as federated PCA and federated kPCA. We propose a Riemannian federated SVRG method and analyze its convergence rate under different scenarios. Numerical experiments are conducted to demonstrate the significant advantages of the proposed method.
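For intuition (a generic SVRG-type correction, not necessarily the precise update used in the chapter), each client $i$ may combine its local stochastic gradient with a full-gradient reference point $\tilde{x}$ broadcast by the server,
\[
v_i \;=\; \mathrm{grad}\, f_i(x_i) \;-\; \mathcal{T}\big(\mathrm{grad}\, f_i(\tilde{x}) - \mathrm{grad}\, f(\tilde{x})\big),
\qquad f = \tfrac{1}{n}\sum_{i=1}^{n} f_i,
\]
where $\mathcal{T}$ transports tangent vectors from $\tilde{x}$ to the current local iterate $x_i$; the variance-reduced directions $v_i$ are used for local retraction steps before the server aggregates the iterates.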
In the last chapter, we consider a class of Riemannian optimization problems where the objective is the sum of a smooth function and a nonsmooth function. We propose a Riemannian alternating direction method of multipliers (ADMM) with easily computable steps in each iteration. The iteration complexity of the proposed algorithm for obtaining an $\epsilon$-stationary point is analyzed under mild assumptions. Numerical experiments are conducted to demonstrate the advantage of the proposed method.
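As an illustrative splitting (the chapter's formulation may involve a more general coupling), such problems can be written as
\[
\min_{x \in \mathcal{M},\, y} \; f(x) + h(y)
\quad \text{s.t.} \quad A x = y,
\]
where $f$ is smooth, $h$ is nonsmooth but prox-friendly, and $\mathcal{M}$ is the manifold constraint. An ADMM-type scheme then alternates a Riemannian step in $x$, a proximal step in $y$, and a dual update on the multiplier for the constraint $Ax = y$, so that each subproblem remains easy to compute.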