The dissertation focuses on understanding parameter influences on variants of kernel regression models over graphs. Graphs are used to represent complex systems: components of the system are modeled as nodes, and relationships among the components are denoted as edges connecting the nodes. Kernel regression models can be used to solve graph-related problems such as graph signal reconstruction and prediction. In the graph signal reconstruction problem, a common task is to predict an unknown attribute of a node using known values of the same attribute at other nodes together with the network structure. In the graph prediction problem, a common task is to predict a graph signal over the network from historical graph signals. The essence of both problems is to model an input-output relationship, and a kernel-based regression model with an iterative solution is a simple yet potentially powerful approach. The dissertation first presents an application of the kernel regression model to the graph signal reconstruction problem over multi-layer graphs, which aims to estimate unknown nodal values based on known nodal values and the multi-layer network structure. Viewing the mapping from the local network structure of a node to its nodal value as a function in a Reproducing Kernel Hilbert Space (RKHS), a regression model based on multiple kernels is built and a minimization problem is formulated, whose solution is found iteratively with the gradient descent algorithm. This application verifies the predictive ability of the model. It also shows that single-kernel models serve as building blocks of the multi-kernel model and that performance depends on the hyper-parameter settings of the single-kernel regression models.
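
As an illustration of this kind of model, the following is a minimal, hypothetical sketch of a multi-kernel ridge regression fitted by gradient descent. The Gaussian kernels, bandwidths, regularization weight, and step size are all illustrative assumptions, not the dissertation's actual model or settings.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_multikernel(X, y, sigmas=(0.1, 0.3, 1.0), lam=0.1, lr=5e-4, steps=3000):
    # One coefficient vector per single-kernel "building block"; the
    # prediction is the sum of the single-kernel predictions.
    Ks = [gaussian_kernel(X, X, s) for s in sigmas]
    alphas = [np.zeros(len(y)) for _ in Ks]
    for _ in range(steps):
        resid = sum(K @ a for K, a in zip(Ks, alphas)) - y
        for K, a in zip(Ks, alphas):
            # Gradient of 0.5*||sum_i K_i a_i - y||^2 + 0.5*lam*sum_i a_i' K_i a_i
            # with respect to a_i.
            a -= lr * (K @ resid + lam * (K @ a))
    return alphas
```

Each bandwidth in `sigmas` is a single-kernel hyper-parameter; the quality of the fit is sensitive to these choices, which is exactly the dependence on single-kernel settings noted above.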
To achieve better performance at lower computational cost by selecting suitable hyper-parameters, the dissertation then presents a framework for analyzing the influence of the hyper-parameters on the predictions of single-kernel regression models. Because of the iterative nature of the model solution, it is hard to determine the influence of the hyper-parameters directly. The main idea of the proposed framework is therefore to express the model prediction as a weighted sum of the training observations and then to analyze the influence of the parameters on the observation weights. With the framework, it is found that the observation weights are scaled kernel values between the input used for prediction and the inputs of the training observations. This verifies that the kernel acts as a similarity measure and shows that the scaling factor of the kernel values is related to the time difference between the two inputs of the kernel. The framework helps in understanding the impact of the hyper-parameters and hints at suitable selections of those parameters. Finally, the framework is generalized to the parameter analysis of an iterative solution of a kernel regression model for the graph signal prediction problem, where the input is agnostic. The generalized framework can analyze the solution obtained from the batch gradient descent algorithm, making the solution obtained from the gradient descent algorithm a special case.
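
The weighted-sum view can be made concrete for ordinary kernel ridge regression, where the prediction at an input x is k(x)ᵀ(K + λI)⁻¹y, i.e., a weighted sum of the training targets whose weights are re-scaled kernel values. The kernel, bandwidth, and grid below are invented for illustration.

```python
import numpy as np

def rbf(a, b, sigma):
    # Gaussian similarity between scalar inputs.
    return np.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def observation_weights(x, X, lam=0.1, sigma=0.1):
    K = rbf(X[:, None], X[None, :], sigma)
    kx = rbf(x, X, sigma)
    # Weight vector w(x): the prediction is w @ y for any training targets y.
    return kx @ np.linalg.inv(K + lam * np.eye(len(X)))

X = np.linspace(0.0, 1.0, 11)   # training inputs
w = observation_weights(0.5, X)
# The largest weight belongs to the training input closest to x = 0.5,
# consistent with the kernel acting as a similarity measure.
```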

## Scholarly Works (41 results)

We compute the images of polynomial GLN-modules and the coordinate algebra under the Etingof-Freund-Ma functor [5]. These yield Y-semisimple representations of the degenerate affine and double affine Hecke algebras of type C. We give a combinatorial description of the image in terms of standard tableaux on a collection of skew shapes and analyze the weights of the image in terms of contents. For the nondegenerate case, we consider the Jordan-Ma functor [8]. We compute the images of finite-dimensional irreducible Uq(glN)-modules and the quantum coordinate algebra under the Jordan-Ma functor; these are also Y-semisimple representations of the affine and double affine Hecke algebras, respectively.

Without doubt, an unprecedented number of devices is anticipated in the near future as new generations of wireless networks develop. The upcoming 5th Generation (5G) wireless systems are expected to accommodate billions of wireless devices, and massive multiple-input multiple-output (MIMO) systems are prominent candidates for them. In such systems, the acquisition of channel state information (CSI) is of great importance for significantly enhancing spectral efficiency (SE) and energy efficiency (EE). In the thesis, the performance of three channel estimation algorithms, Reduced-Rank Least Mean Square (RR-LMS) estimation, Reduced-Rank Recursive Least Square (RR-RLS) estimation, and Reduced-Rank (RR-) Kalman Filter estimation, is presented. The optimum parameters of RR-LMS and RR-RLS are provided for a first-order autoregressive (AR(1)) channel model. In a second-order autoregressive (AR(2)) channel, we focus on the feasible range of the α pair where RR-LMS and RR-RLS work. For the RR-Kalman Filter algorithm, we first study the impact of parameter mismatch on estimation performance, then present a method to estimate the channel fading coefficient α and the channel variance σh².
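
A hypothetical scalar illustration (not the thesis code) of LMS tracking of a first-order autoregressive, AR(1), fading channel h[t] = α·h[t-1] + √(1-α²)·w[t] from pilot observations y[t] = h[t]·x[t] + v[t]. The step size mu, α, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, mu, T = 0.999, 0.2, 5000
h, h_hat, err = 1.0, 0.0, []
for t in range(T):
    # AR(1) channel evolution with unit stationary variance.
    h = alpha * h + np.sqrt(1 - alpha ** 2) * rng.standard_normal()
    x = 1.0                               # known pilot symbol
    y = h * x + 0.05 * rng.standard_normal()
    e = y - h_hat * x                     # a-priori estimation error
    h_hat += mu * e * x                   # LMS update
    err.append((h - h_hat) ** 2)
mse = np.mean(err[T // 2:])               # steady-state tracking MSE
```

The step size mu trades adaptation speed against steady-state error, which is why optimum parameter choices matter in such channel models.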

Fluorescence Molecular Tomography (FMT) is a novel optical imaging approach that has been investigated for about two decades. FMT is motivated by its low cost, non-ionizing radiation, high sensitivity, and the wide availability of contrast agents. In vivo FMT imaging allows 3D visualization of molecular activities in the tissues of live small animals. Typical applications of FMT include protease activity detection, cancer detection, bone regeneration imaging, and drug delivery studies.

Our lab has developed a prototype FMT imaging system with a conical mirror for whole-surface measurement. With this prototype, we systematically studied the performance of the conical mirror-based FMT imaging system. In the system, the object is placed inside a conical mirror and scanned with a line-pattern laser mounted on a rotary stage; the rotary laser scanning approach was introduced to cast the excitation laser pattern conveniently. After being reflected by the conical mirror, the emitted fluorescence photons pass through the central hole of the rotation stage and the band-pass filters in a motorized filter wheel, and are finally collected by a CCD camera. To improve the measurement dynamic range, we applied different neutral density filters. We also tested different measuring modes to compare their effects on the FMT reconstruction accuracy. Experimental results indicate that the conical mirror-based FMT system can reconstruct targets with high accuracy after its optimization.

Another optimization of the FMT imaging system is the application of 3D optical profilometry for obtaining the object geometry. We utilized a phase-shifting method to extract the mouse surface geometry. Nine fringe patterns with a phase shift of 2π/9 are projected onto the mouse surface by a pico-projector. The fringe patterns are captured with a webcam to calculate a phase map, which is then converted to the geometry of the mouse surface. We used a DigiWarp approach to warp a finite element mesh of a standard digital mouse to the measured mouse surface, removing the tedious and time-consuming conversion from a point cloud to a finite element mesh. Experimental results indicated that the proposed method is accurate, with errors of less than 0.5 mm. Phantom experiments have demonstrated that the proposed FMT imaging system can reconstruct the target accurately.
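
The phase-map computation can be sketched with the standard arctangent estimator for N equally spaced phase shifts (here N = 9, shift 2π/9, as in the text). The synthetic fringe images below stand in for the real camera frames; unwrapping and phase-to-height conversion are omitted.

```python
import numpy as np

N = 9
shifts = 2 * np.pi * np.arange(N) / N
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)   # test phase profile
# Each frame follows the fringe model I_n = A + B*cos(phi + delta_n).
frames = [1.0 + 0.5 * np.cos(phi_true + d) for d in shifts]

# Standard N-step arctangent estimator for equally spaced shifts.
num = -sum(I * np.sin(d) for I, d in zip(frames, shifts))
den = sum(I * np.cos(d) for I, d in zip(frames, shifts))
phi_est = np.arctan2(num, den)   # wrapped phase in (-pi, pi]
```

For equally spaced shifts the background A and modulation B cancel exactly, so the wrapped phase is recovered pointwise from the nine intensities.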

Moreover, we applied Monte Carlo ray tracing to study the multiple-reflection effect of the conical mirror. A conical mirror is a preferred choice for FMT imaging systems because of its ability to collect fluorescence emission photons from the whole surface of the imaged object. However, the conical mirror may reflect a fraction of the photons, both excitation and emission photons, back onto the mouse surface, which results in inaccurate source positions and measurement errors in the forward modeling and the reconstruction of FMT. Based on Monte Carlo simulations, we investigated different conical mirror designs to select the one with the minimum multiple reflection. We first generated a multiply-reflected photon map for each conical mirror design, and then applied Monte Carlo simulations to model photon propagation inside tissues. Finally, we evaluated the ratio of the multiply-reflected photons to the total photons and determined the optimized size of the conical mirror. Our simulations demonstrated that a single conical mirror configuration could minimize the multiple-reflection issues while keeping the imaging system setup simple when its small aperture radius is larger than 5 centimeters. We then fabricated a conical mirror with the optimized size and performed phantom experiments with both the optimized conical mirror and the non-optimized one. Phantom experiment results show that noise in the reconstructed images is reduced with the optimized conical mirror and that the reconstruction accuracy is improved as well. Other mirror setups, such as pyramid-mirror and two-sided flat-mirror setups for bioluminescence optical tomography and Cerenkov luminescence imaging, were studied by simulations as well.

Finally, we imaged euthanized mice to validate the optimized FMT imaging system. To reduce the effect of autofluorescence from mouse skin, we compared a point laser with the line laser for scanning the mouse surface. A soft prior obtained from MicroCT images was utilized to guide the FMT reconstruction. Comparing the reconstructed FMT images, we found that the line laser performed better than the point laser. Moreover, we applied a demixing method with measurements at four different emission wavelengths and used the demixed measurements at 720 nm as the input for the FMT reconstruction, again with the soft prior method. Reconstruction results show that the demixing method improves the accuracy of the reconstructed FMT images.

In the future, we will perform mouse imaging using a laser at longer wavelengths (such as 780 nm), because the autofluorescence excited by longer-wavelength lasers is weaker. We will also incorporate a MicroCT imaging system into the FMT imaging system so that the anatomical guidance extracted from CT images can be used to guide the FMT reconstruction precisely and conveniently. We will further perform in vivo mouse experiments with the optimized FMT imaging system and evaluate the quality of the reconstruction results.

In this paper, we investigate the optimal spectrum management problem in multiuser frequency-selective interference channels. First, a simple pairwise condition for FDMA to be optimal is discovered: for any two of the users, as long as the normalized cross-couplings between them are both greater than or equal to 1/2, orthogonalization between these two users is optimal for every existing user. This single condition therefore applies to achieving all Pareto optimal points of the rate region. Furthermore, not only is this condition sufficient, but in symmetric channels it is also necessary for FDMA to be always optimal. When the normalized cross-couplings are less than 1/2, the optimal spectrum management strategy can be a mixture of frequency sharing and FDMA, depending on the users' power constraints. We first explicitly solve the sum-rate maximization problem in two-user symmetric flat channels by solving a closed-form equation, providing the optimal spectrum management with a clear intuition as the optimal combination of flat FDMA and flat frequency sharing. Next, we show that this result leads to a primal-domain convex optimization formulation that generalizes to frequency-selective channels. Finally, we show that all the general optimization problems with n ≥ 2 users and an arbitrary weighted sum-rate objective function in non-symmetric frequency-selective channels can be solved by primal-domain convex optimization with the same methodology.
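
The flavor of this tradeoff can be checked numerically in a symmetric two-user flat channel with unit direct gains (power p, noise sigma, and normalized cross-coupling alpha are illustrative values, not from the paper). Under full frequency sharing each user gets rate log2(1 + p/(sigma + alpha·p)); under flat FDMA each user concentrates its power on half the band, doubling its power density there.

```python
import numpy as np

def sum_rates(p, sigma, alpha):
    # Full sharing: both users occupy the whole band and interfere.
    r_share = 2 * np.log2(1 + p / (sigma + alpha * p))
    # Flat FDMA: each user gets half the band at double power density,
    # so the total is 2 * 0.5*log2(1 + 2p/sigma).
    r_fdma = np.log2(1 + 2 * p / sigma)
    return r_share, r_fdma
```

Consistent with the pairwise condition, strong cross-coupling favors FDMA while weak coupling favors frequency sharing.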

Accurately estimating the failure region of rare events for memory-cell and analog circuit blocks under process variations is a challenging task. In the first part of the thesis, the author proposes a new statistical method, called EliteScope, to estimate circuit failure rates in rare-event regions and to provide the conditions on parameters needed to achieve targeted performance. The new method is based on the iterative blockade framework to reduce the number of samples, and it adds two new techniques to improve existing methods. First, the new approach employs an elite-learning sample selection scheme, which accounts for both the effectiveness of samples and good coverage of the parameter space. As a result, it can reduce additional simulation costs by pruning less effective samples while keeping the accuracy of the failure estimation. Second, EliteScope identifies the failure regions in terms of parameter spaces to provide good design guidance for accomplishing the performance target. It applies variance-based feature selection to find the dominant parameters and then determines the in-spec boundaries of those parameters. We demonstrate the advantage of the proposed method using several memory and analog circuits with different numbers of process parameters. Experiments on four circuit examples show that EliteScope achieves a significant improvement in failure-region estimation in terms of accuracy and simulation cost over traditional approaches. The 16-bit 6T-SRAM column example also demonstrates that the new method is scalable to large problems with a large number of process variables.
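
The variance-based feature-selection step can be sketched as ranking each process parameter by the variance of the conditional mean of the performance, Var[E(y | x_i)], estimated by binning samples along that parameter. The "circuit performance" function below is an invented stand-in, not a real circuit model.

```python
import numpy as np

rng = np.random.default_rng(2)

def perf(params):
    # Invented response, dominated by parameter 0 with a weak
    # dependence on parameter 1 and almost none on parameter 2.
    return 3.0 * params[:, 0] + 0.3 * params[:, 1] ** 2 + 0.01 * params[:, 2]

n, d = 5000, 3
P = rng.standard_normal((n, d))   # sampled process parameters
y = perf(P)

scores = []
for i in range(d):
    order = np.argsort(P[:, i])
    bins = np.array_split(y[order], 50)              # bin samples along x_i
    scores.append(np.var([b.mean() for b in bins]))  # Var of E(y | x_i bin)
dominant = int(np.argmax(scores))    # index of the dominant parameter
```

Parameters with high scores explain most of the output variance; only those need in-spec boundaries determined in detail.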

Interactions among dislocations and solute atoms are the basis of several important processes in metal plasticity. In body-centered cubic (bcc) metals and alloys, low-temperature plastic flow is controlled by screw dislocation glide, which is known to take place by the nucleation and sideward relaxation of kink pairs across two consecutive Peierls valleys. In alloys, dislocations and solutes affect each other's kinetics via long-range stress field coupling and short-range inelastic interactions. It is known that in certain substitutional bcc alloys a transition from solute softening to solute hardening is observed at a critical concentration. In the first part of this work, we develop a kinetic Monte Carlo model of screw dislocation glide and solute diffusion in substitutional W–Re alloys. We find that dislocation kinetics is governed by two competing mechanisms. At low solute concentrations, nucleation is enhanced by the softening of the Peierls stress, which dominates over the elastic repulsion of Re atoms on kinks. This trend is reversed at higher concentrations, resulting in a minimum in the flow stress that is concentration and temperature dependent. This minimum marks the transition from solute softening to hardening, which is found to be in reasonable agreement with experiments.
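
A bare-bones kinetic Monte Carlo (residence-time) loop of the kind underlying such models: at each step an event is drawn with probability proportional to its rate, and the clock advances by an exponentially distributed residence time. The two events and their rates below are placeholders, not the actual kink-pair nucleation or solute-hop rates of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
rates = np.array([5.0, 1.0])   # e.g. [kink-pair nucleation, solute hop]
t = 0.0
counts = np.zeros_like(rates)
for _ in range(20000):
    total = rates.sum()        # in a real model, rates depend on the state
    i = rng.choice(len(rates), p=rates / total)   # pick an event
    t += rng.exponential(1.0 / total)             # advance the clock
    counts[i] += 1
```

Event frequencies converge to the rate ratio, and the clock advances in total-rate-dependent increments, which is what allows rare events and diffusive timescales to be captured rigorously in one simulation.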

In the second part of this work, we extend our model to interstitial W–O alloys. We report for the first time on simulations of jerky flow in W–O as a representative bcc interstitial solid solution. The simulations are carried out in a stochastic framework that naturally captures rare events in a rigorous manner, enabling the study of solute diffusion and dislocation motion concurrently. The model has no adjustable parameters, with all coefficients calculated using first-principles methods. We find that three regimes emerge from the stress-temperature space: one representative of standard solid-solution strengthening, another mimicking solute cloud formation, and a third one, where the dynamic interaction of solutes and dislocations results in jerky flow and dynamic strain aging. We show how the symbiosis between quantum mechanical calculations and mesoscopic methods capable of furnishing diffusive timescales is a powerful demonstration of the capacity of physical models to explain macroscopic behavior.