Open Access Publications from the University of California

Graph Based Scalable Algorithms with Applications

  • Author(s): Vaz, Garnet Jason
  • Advisor(s): Bhat, Harish S

In this thesis, we propose algorithms for problems arising in nonlinear circuits, nonlinear electromagnetics, and data mining. Through the design and implementation of these algorithms, we show that they scale to large problems.

In the first part of the thesis, we provide two solutions to the forward problem of finding the steady-state solution of nonlinear RLC circuits subjected to harmonic forcing. This work generalizes prior results on structured graphs, providing a mathematical theory that extends them to random graphs. Both algorithms are shown to be orders of magnitude faster than time stepping. We then introduce an inverse problem: maximizing the energy/voltage at certain nodes of the graph without altering the graph structure. We solve it with a Newton-type algorithm that adjusts the eigenvalues associated with the weighted graph Laplacian of the underlying circuit. Extensive results verify that a majority of random graph circuits are capable of producing amplitude boosts.
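The central object here, the weighted graph Laplacian of a random circuit, can be sketched as follows. This is an illustrative construction only (the random graph, weights, and sizes below are made up and are not the thesis's algorithm); it shows the matrix whose eigenvalues a Newton-type method would adjust through the edge weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small Erdos-Renyi-style random graph; edge weights stand in for
# circuit parameters. All values here are illustrative assumptions.
n, p = 8, 0.4
A = np.triu((rng.random((n, n)) < p).astype(float), 1)
W = A * rng.uniform(0.5, 2.0, size=(n, n))  # random positive edge weights
W = W + W.T                                  # symmetric weighted adjacency

# Weighted graph Laplacian: L = D - W, with D the diagonal degree matrix.
L = np.diag(W.sum(axis=1)) - W

# L is symmetric positive semidefinite; its spectrum is the quantity
# an eigenvalue-targeting inverse method would manipulate.
eigvals = np.linalg.eigvalsh(L)  # sorted ascending, smallest is ~0
print(eigvals)
```

Because the rows of `L` sum to zero, the smallest eigenvalue is always zero; any reweighting scheme can move only the remaining eigenvalues.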

Next, we connect the nonlinear Maxwell's equations in 2D to the RLC circuit problem. The connection arises from a finite-volume discretization of the nonlinear Maxwell's equations: the dual graph of the discretized domain is a planar random graph very similar to those in our previous work, so the algorithms developed there apply directly. Using distributed computing, we implement one of these algorithms so that it scales to large problems, yielding fast and accurate solutions. Simulations on structured and unstructured meshes verify that the method is first-order accurate in space.
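The dual-graph construction that links the mesh to a circuit can be sketched in a few lines. This is a minimal illustration on a made-up four-triangle mesh, not the thesis's implementation: each cell of the discretization becomes a dual node, and two cells sharing an edge become adjacent.

```python
from collections import defaultdict

# Hypothetical toy mesh: triangles given as vertex-index triples.
triangles = [
    (0, 1, 2),
    (1, 2, 3),
    (2, 3, 4),
    (0, 2, 4),
]

# Map each undirected mesh edge to the triangles that contain it.
edge_to_tris = defaultdict(list)
for t, (a, b, c) in enumerate(triangles):
    for u, v in ((a, b), (b, c), (a, c)):
        edge_to_tris[frozenset((u, v))].append(t)

# Dual-graph edges: pairs of triangles sharing a mesh edge.
dual_edges = sorted(
    tuple(sorted(ts)) for ts in edge_to_tris.values() if len(ts) == 2
)
print(dual_edges)  # pairs of adjacent cells in the dual graph
```

On an unstructured triangulation this dual graph is planar with irregular connectivity, which is what makes the random-graph circuit machinery applicable.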

Our final application is in supervised learning for regression. Regression trees have been used extensively since their introduction and form the basis of several state-of-the-art machine learning methods today. A regression tree minimizes a loss criterion (objective function) using a greedy heuristic algorithm, and the usual criterion is the squared error. Although minimizing the absolute deviation is known to yield trees that are more robust in the presence of outliers, trees based on absolute-loss minimization have been ignored because they were believed to be computationally expensive. We provide the first implementation with the same algorithmic complexity as trees built with the squared-error loss. Beyond computing absolute-deviation trees, our algorithm generalizes and can serve as a non-parametric alternative to quantile regression.
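The split criterion at issue can be illustrated directly. The sketch below is a naive O(n²) search for the best absolute-deviation split of a single node (the function name `best_l1_split` and the data are made up); the thesis's contribution is computing exactly this quantity at the same asymptotic cost as the squared-error criterion, which this sketch does not attempt.

```python
import numpy as np

def best_l1_split(x, y):
    """Naive search for the split threshold on feature x that minimizes
    the total absolute deviation from the per-side medians of y.
    Illustrative only: recomputes each side's median from scratch."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_loss, best_thresh = np.inf, None
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue  # cannot split between equal feature values
        left, right = y[:i], y[i:]
        loss = (np.abs(left - np.median(left)).sum()
                + np.abs(right - np.median(right)).sum())
        if loss < best_loss:
            best_loss, best_thresh = loss, (x[i - 1] + x[i]) / 2
    return best_loss, best_thresh

# Made-up data with one extreme response value.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.0, 1.2, 0.9, 5.0, 5.1, 50.0])
loss, threshold = best_l1_split(x, y)
print(loss, threshold)
```

Replacing `np.median` and the absolute deviations with means and squared deviations gives the standard squared-error split; the medians are what make the absolute-deviation criterion (and the resulting tree) less sensitive to extreme responses.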
