eScholarship
Open Access Publications from the University of California

UC Merced Electronic Theses and Dissertations

Detector readout of an Analog Quantum Simulator

(2021)

An important question in quantum simulation is the certification of quantum simulators with proper readout. We examine how a detector's correlator changes when the detector is coupled to a quantum simulator, using a diagrammatic technique. From the correlation functions calculated with this technique, we can determine whether or not reliable detection of the simulator's correlator can be achieved. When reliable detection is not possible due to detector back-action, we examine the situations in which the back-action can be negligible. In particular, we study a cavity detector coupled to a Transverse Field Ising Model. We use a similar diagrammatic technique to study the interaction between a cavity and a qubit in the ultrastrong coupling regime. This cavity-qubit system is of importance in quantum computing and is a fundamental model in cavity QED. Ultrastrong coupling strength enables novel approaches for quantum logic operations. Our approach provides a fresh perspective on calculating the transmission spectra and the impact of the ultrastrongly coupled cavity on the qubit behavior.

New Online and Approximate Scheduling Algorithms

(2020)

This dissertation focuses on the design and analysis of approximation and online algorithms for scheduling problems arising in large datacenters and cloud computing environments. The recent advancement of science and engineering relies crucially on computing platforms that process large data sets, and modern datacenters provide such platforms.

We address four different scheduling settings motivated by modern computing environments: speed scaling, coflow, batch scheduling, and unrelated machines.

Speed scaling setting: Modern processors typically allow dynamic speed scaling, offering an effective trade-off between high throughput and energy efficiency. In a classical model, a processor/machine running at speed s consumes power s^α, where α > 1 is a constant. We study the problem of completing all jobs before their deadlines, non-preemptively and with minimum energy, on parallel machines with dynamic speed-scaling capabilities. This problem is crucial in datacenters, which are estimated to use as much energy as a city.
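As a minimal worked example of this power model (illustrative only, not a result of the dissertation): by convexity of s^α, a single job with work w executed non-preemptively in a window of length T is run most cheaply at the constant speed s = w/T, for a total energy of

\[
E \;=\; T \cdot s^{\alpha} \;=\; T \left(\frac{w}{T}\right)^{\alpha} \;=\; w^{\alpha}\, T^{\,1-\alpha}, \qquad \alpha > 1,
\]

so stretching a job over a longer window always saves energy; the difficulty on parallel machines is choosing windows that respect the release times and deadlines of all jobs simultaneously.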

Coflow setting: Coflow has recently been introduced to abstract communication patterns that are widely observed in cloud and massively parallel computing frameworks. A coflow consists of several flows, each representing data communication from one machine to another, and is completed when all of its flows are completed. We consider coflow scheduling with the objective of maximizing partial throughput, which measures the progress made on partially completed coflows before their deadlines.
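In symbols (immediate from the definition above): if a coflow j consists of flows f with individual completion times C_f, then its completion time is

\[
C_j \;=\; \max_{f \in j} C_f ,
\]

and a partial-throughput objective, roughly speaking, credits a coflow for the portion of its flows or data completed by its deadline rather than scoring it all-or-nothing (the precise definition is the dissertation's).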

Batch scheduling setting: This setting is used to model client-server systems like multiuser operating systems, web servers, etc. In this setting, there is a server that stores different pages of information and receives requests from clients over time for specific pages. The server can transmit at most one page at each time to satisfy a batch of requests for that page, up to a certain capacity. We study the maximum flow time minimization problem in this setting, which is a capacitated version of broadcast scheduling.

Unrelated machines setting: The unrelated machines setting is one of the most general and versatile scheduling models. It captures the heterogeneity of jobs and machines, which is widely observed in cloud computing environments and server clusters. We consider the classic scheduling problem of minimizing total weighted completion time on unrelated parallel machines. Machines are unrelated in the sense that each job can have different processing times on different machines.

We use standard worst-case analysis to measure the quality of the algorithms we develop for the underlying scheduling problems. The approximation ratio and the competitive ratio are two popular quantities that measure the worst-case performance of offline and online algorithms, respectively. An offline algorithm is said to be an α-approximation if its objective is within a multiplicative factor α of the optimal scheduler's objective on any input instance. Roughly speaking, an algorithm with a small approximation ratio performs well compared to the optimal scheduler, even on a worst-case input. To evaluate the worst-case performance of an online algorithm, we compute the worst ratio between the online algorithm's objective value and the optimal offline scheduler's objective over all input instances.
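In symbols (a standard definition consistent with the description above): for a minimization objective, an offline algorithm ALG is an α-approximation if

\[
\mathrm{ALG}(I) \;\le\; \alpha \cdot \mathrm{OPT}(I) \quad \text{for every instance } I,
\]

and an online algorithm is c-competitive if the same inequality holds with α replaced by c, where OPT now denotes the optimal offline scheduler that knows the entire input in advance.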

Multi-stage endothelial differentiation and expansion of human pluripotent stem cells

(2020)

Human endothelial cells (ECs) have prospects for a wide range of clinical applications, including cell-based therapies and tissue engineering, and hold tremendous potential for research in vascular development, drug discovery, and disease modelling. Efficient and robust induction of ECs from human pluripotent stem cells (hPSCs) would provide a renewable and effectively unlimited source. However, distinct human embryonic stem cell (hESC) and induced pluripotent stem cell (hiPSC) lines respond differently to the same microenvironmental signals, and developing an optimized differentiation methodology that is robust across multiple hPSC lines, including hiPSC lines derived from autologous patient-specific cells, remains a challenge in the field. We demonstrate a chemically defined multi-stage EC differentiation process that works across multiple hPSC lines and can generate highly purified populations of actively proliferating, VE-Cadherin+ functional ECs in 30 days. A few published methods efficiently derive large numbers of endothelial progenitor cells (EPCs) within a week, but maturing them into definitive ECs is difficult, takes longer, and requires additional purification.

Nanotribology of MoS2 Investigated via Atomic Force Microscopy

(2020)

The potential use of two-dimensional (2D) materials as solid lubricants in micro- and nano-scale mechanical systems draws significant attention, mainly because liquid-based lubrication schemes fail at such small length scales. Within this context, the lamellar material molybdenum disulfide (MoS2), in the form of a single layer or a few layers, emerges as a promising candidate for the solid lubrication of small-scale mechanical systems.

Motivated as above, this thesis focuses on the nanotribological properties of mechanically exfoliated MoS2, explored via state-of-the-art atomic force microscopy (AFM) experiments. First, the dependence of friction force on sliding speed is investigated for single-layer and bulk MoS2 samples. The results demonstrate that (i) friction forces increase logarithmically with sliding speed, (ii) there is no correlation between the speed dependence of friction and the number of MoS2 layers, and (iii) changes in the speed dependence of friction can be attributed to changes in the physical characteristics of the AFM probe. The direction dependence (i.e., anisotropy) of friction on MoS2 is studied next. In particular, high-resolution AFM measurements conducted by our collaborators at McGill University led to the direct imaging of atomic-scale ripples on few-layer MoS2 samples, allowing us to explain the various symmetries of friction anisotropy observed in our experiments as a function of scan size. Finally, the nanotribological properties of Re-doped MoS2 are studied, revealing a surprising inverse dependence of friction force on the number of layers, in contradiction with the seemingly universal trend of decreasing friction with increasing layer number in 2D materials. We attempt to uncover the physical mechanisms behind this striking observation by way of roughness and adhesion measurements.
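For reference, a functional form commonly used in the nanotribology literature to describe such logarithmic speed dependence (an assumed fitting form given for illustration, not necessarily the fit used in this thesis) is

\[
F_f(v) \;=\; F_0 + a \ln\!\left(\frac{v}{v_0}\right),
\]

where F_0, a, and the reference speed v_0 are fit parameters; thermally activated stick-slip models of the Prandtl-Tomlinson type motivate this scaling.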

In summary, the results reported in this thesis contribute to a comprehensive, mechanistic understanding of the nanotribological properties of MoS2 in particular and of 2D materials in general. While the speed dependence and anisotropy results are relatively self-contained, further work is needed to explain the inverse layer dependence of friction observed on Re-doped MoS2.

Ku-Mo: Popular Culture and the Impossible Sovereignty of Taiwan

(2020)

This project examines the ways Taiwan's contested sovereignty pokes holes in dominant understandings of what it means to be a sovereign nation, based on the discourses of Taiwan that appear in transnational popular culture and media. As a result of Taiwan's role as a global economic center, the traces Taiwan leaves behind in transnational media, and the scandals they garner, reflect the larger dynamic of Taiwan as a constant problem and yet a valuable commodity for powerful nation-states. Taiwan is simultaneously a site of transnational profit for states like China and a rhetorical threat to a One China Policy. This liminality represents a kind of impossible sovereignty, one that is politically illegible but functional nonetheless. This form of sovereign absurdity for Taiwan is perhaps best encapsulated by the Hokkien term ku-mo. Ku-mo is a transliteration of the Taiwanese (台語) colloquialism 龜毛, which describes someone who is slow or high-maintenance to the point of inconveniencing others. There is a second definition of 龜毛, one grounded in a Buddhist idiom that represents an absurdity: something that does not and ought not to exist. I argue that Taiwan's impossible sovereignty can be described as ku-mo in the sense that it is conceptualized both as an impossibility and as an inconvenience that disrupts otherwise uniform and smooth processes of international trade and media production.

Taiwan’s contemporary liminal sovereignty presents a profound problem for the very nation-states that attempt to erase it, and as a result of Taiwan’s role as a center of transnational capital, the debate over Taiwanese sovereignty is contested and mediated transnationally through culture and the culture industry of multiple nation-states. The relevant question of Taiwan’s sovereignty for us is not, “Is Taiwan sovereign?” but rather “What does the discourse surrounding the question of Taiwanese sovereignty accomplish and how does it function?” Examining sovereignty as a discourse and a practice allows us to explore the ways conceptions of sovereignty are defined by documentation, institutions, and bureaucracy that are perpetually attempting to contain that which cannot be contained.

  • 1 supplemental video

GPU Rasterization Methods for Path Planning and Multi-Agent Navigation

(2020)

In this dissertation I present new GPU-based approaches for addressing path planning and multi-agent navigation problems. The proposed methods rely on GPU rasterization techniques to construct navigation structures which allow us to address these problems in novel ways.

There are three main contributions described in this document.

The first is a new method for computing Shortest Path Maps (SPMs) for generic 2D polygonal environments. By making use of GPU shaders, an approach is presented for implementing the continuous Dijkstra wavefront propagation method, resulting in an SPM representation stored in a GPU buffer that can efficiently give a globally optimal shortest path between any point in the environment and the considered source point. The proposed shader-based approach also allows several extensions to be incorporated: multiple source points, multiple source segments, and weights that alter the wavefront propagation in order to model velocity changes at vertices. These extensions allow SPMs to address a wide range of real-world situations.
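To make the wavefront idea concrete, here is a minimal CPU sketch of discrete multi-source wavefront propagation on a uniform grid (a hypothetical simplification: the dissertation's method runs in GPU shaders over continuous polygonal environments, not on a grid). Each cell records its geodesic distance and which source "owns" it, which is the information an SPM encodes.

import heapq

def shortest_path_map(grid, sources):
    """grid: 2D list, 0 = free cell, 1 = obstacle.
    sources: list of (row, col) source points.
    Returns (dist, owner): per-cell geodesic distance and the index of
    the closest source, i.e. a discretized shortest path map."""
    rows, cols = len(grid), len(grid[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    owner = [[-1] * cols for _ in range(rows)]
    heap = []
    for idx, (r, c) in enumerate(sources):
        dist[r][c] = 0.0
        owner[r][c] = idx
        heapq.heappush(heap, (0.0, r, c, idx))
    while heap:  # Dijkstra-style expansion: the heap front is the wavefront
        d, r, c, idx = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale heap entry, already improved
        for dr, dc, w in ((1, 0, 1.0), (-1, 0, 1.0), (0, 1, 1.0), (0, -1, 1.0),
                          (1, 1, 2**0.5), (1, -1, 2**0.5),
                          (-1, 1, 2**0.5), (-1, -1, 2**0.5)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + w
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    owner[nr][nc] = idx  # this source's wavefront arrived first
                    heapq.heappush(heap, (nd, nr, nc, idx))
    return dist, owner

Once owner and dist are filled, a shortest path from any query cell is recovered by repeatedly stepping to the neighbor with the smallest distance; the GPU version stores the analogous information per pixel in a frame buffer.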

The second contribution addresses the global coordination of multiple agents flowing from source to sink edges in a polygonal environment. The same GPU-based SPM methods are extended to compute a Continuous Max Flow in the input environment, which can be used to guide agents through the environment from source edges to sink edges, leading to a flow representation stored in the frame buffer of the GPU. A method for extracting flow lanes respecting clearance constraints is also presented, achieving the maximum possible number of lanes to route agents across an environment without ever creating bottlenecks.

In order to address decentralized autonomous agents, the third contribution presents a new method for dynamically detecting and representing in SPMs the regions where agents are bottlenecked. The incorporation of weighted barriers is proposed to model the corresponding time delays in corridors of the SPMs, in order to provide agents with alternative paths that avoid bottlenecks. In this way, a novel type of SPM is defined, providing optimal solutions from weights that reflect dynamic delays in the corridors of the environments.

The methods proposed in this dissertation present novel approaches for addressing optimal paths and agent distribution in planar environments. Given the continuous development of high-performance GPUs, the proposed methods have the potential to open new avenues for the development of efficient navigation algorithms and representations.

Probabilistic Constrained Decision Making for Robots Exploring, Mapping, and Navigating Indoor Environments.

(2020)

Robots are becoming more a part of our daily lives. They have become an extension of some of our human capabilities, and there is a need to develop control algorithms that contribute to the successful deployment of these machines to navigate, explore, and map indoor human environments. These robots and their actions, despite our efforts to make them as predictable as possible, show stochastic behaviors as well as motion and sensing uncertainties. We leverage Constrained Markov Decision Processes (CMDPs) to balance multi-cost problems with constraints, under the premise of multiple possible sources of uncertainty. This dissertation addresses several of these problems in the following chapters.
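For orientation, a standard occupancy-measure formulation of a CMDP (a textbook form; the concrete costs and constraints vary by chapter of the dissertation) is the linear program

\[
\min_{\rho \ge 0} \;\sum_{s,a} \rho(s,a)\, c(s,a)
\quad \text{s.t.} \quad
\sum_{s,a} \rho(s,a)\, d_k(s,a) \;\le\; D_k, \qquad k = 1,\dots,K,
\]

together with flow-conservation constraints \(\sum_a \rho(s',a) = \beta(s') + \gamma \sum_{s,a} \rho(s,a)\, P(s' \mid s,a)\) linking the occupancy measure \(\rho\) to the transition kernel \(P\), initial distribution \(\beta\), and discount \(\gamma\); here \(c\) is the primary cost and each pair \((d_k, D_k)\) encodes an auxiliary cost with its budget, such as a temporal or failure-probability constraint.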

Initially, we highlight some theoretical background on the Markov Decision Process (MDP) and its extension, the CMDP. From this point, we address the problem of multiple robots visiting multiple targets under fixed temporal and failure-probability constraints. Our solution expands the state space following a binary sequence that represents successful observations of each target. We define this as the rapid deployment problem and solve it for a team of robots.

Closing the gap between theory and reality, we implement a stochastic model that recreates a robot's motion primitives. We then use these modeled primitives to create modeled trajectories and extract transition probabilities from them. These transition probabilities characterize some of the robot's behavior, and we use them in our CMDP formulation to calculate a navigation policy for traversing real scenarios.
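A minimal sketch of the trajectory-to-transition-probability step described above (function and variable names here are hypothetical illustrations, not the dissertation's code):

from collections import Counter, defaultdict

def estimate_transitions(trajectories):
    """trajectories: list of trajectories, each a list of
    (state, action, next_state) tuples generated from the
    modeled motion primitives.
    Returns P[(s, a)][s_next] = empirical probability of
    landing in s_next after taking action a in state s."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for s, a, s_next in traj:
            counts[(s, a)][s_next] += 1  # tally observed transitions
    P = {}
    for sa, ctr in counts.items():
        total = sum(ctr.values())
        P[sa] = {s_next: n / total for s_next, n in ctr.items()}
    return P

The resulting table P plays the role of the CMDP transition kernel; its accuracy depends on how many modeled trajectories are sampled per state-action pair.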

We create and implement a new spatial model dubbed the Oriented Topological Semantic Map (OTSM). This new type of map can be built at run time and, together with a CMDP, allows us to assign actionable temporal deadlines to a robot executing an exploration task. We open-sourced a ROS framework that can be downloaded and used to reproduce our results, and we published the first Reproducible Article (R-article) in robotics. We implement an OTSM by combining an Orientation System (OS), an Intersection Detection System (IDS), and a Labeling System (LS), using odometry, accelerometers, a LIDAR, and a residual neural network (ResNet) to extract the orientation, topology, and semantics of an indoor environment.

In the last part of this dissertation, we propose a new algorithm for merging pieces of OTSMs when a group of robots is tasked with exploring an unknown environment and combining their local maps into a global map. Our solution was inspired by research in cognitive science on object recognition. Applying this theory, we create a two-stage method to compare vertices in different OTSMs and measure their resemblance.

Characterization and Modeling of Error Resilience in HPC Applications

(2020)

HPC systems are widely used in industrial, economic, and scientific applications, and many of these applications are safety- and time-critical. We must ensure that application execution is reliable and that scientific simulation outcomes are trustworthy. As HPC systems continue to increase in computational power and size, next-generation HPC systems are expected to incur higher failure rates than contemporary systems. How to ensure scientific computing integrity in the presence of an increasing number of system faults is one of the grand challenges (also known as the resilience challenge) for large-scale HPC systems.

This dissertation focuses on characterizing, modeling, developing, and advancing resilience strategies and tools in HPC systems to help scientific applications better survive system failures. In particular, we systematically characterize HPC applications to find the reasons behind their natural error resilience, both by tracking error propagation and by capturing application properties according to their significance to error resilience using machine learning. We further model application error resilience at different granularities, including individual data objects, small computation kernels, and whole applications. We also develop an error resilience benchmark suite to comprehensively evaluate and comparatively study different error resilience designs in the presence of MPI process or node failures. With the knowledge gained from characterizing and modeling application error resilience, we propose a collection of new methodologies and tools that can guide HPC practitioners to the most effective and efficient error resilience designs, help advance the effectiveness and efficiency of existing designs, and lay foundations for future error resilience designs aiming at higher effectiveness and efficiency in HPC systems.

Asymptotic analysis of boundary integral equations of regions with high curvature

(2020)

In this project we investigate the behavior of layer potentials in regions of high curvature in two dimensions, in particular on an asymptotically collapsing ellipse. Layer potentials arise in boundary integral methods and offer several numerical advantages, but they can be adversely affected by regions of high curvature; such phenomena appear in slender body theory. In this thesis we propose two approaches to address this challenge: a modification of quadrature rules using asymptotic methods, and a spectral method for cases where the Fourier coefficients can be found analytically. We apply these techniques to several problems: Laplace problems (interior Dirichlet and exterior Neumann) and a scattering problem.
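For concreteness, the double-layer potential for the 2D Laplace equation (standard notation consistent with the setting described above) is

\[
u(x) \;=\; \int_{\Gamma} \frac{\partial G(x,y)}{\partial n_y}\,\sigma(y)\, ds_y,
\qquad
G(x,y) \;=\; -\frac{1}{2\pi} \ln |x - y|,
\]

where \(\sigma\) is the unknown density on the boundary \(\Gamma\). When \(\Gamma\) is an ellipse collapsing toward a slit, the kernel becomes nearly singular for target points near the high-curvature ends, which is the regime the asymptotic quadrature corrections and the spectral method target.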

Studies of Monte Carlo Methodology for Assessing Convergence, Incorporating Decision Making, and Manipulating Continuous Variables

(2020)

In this dissertation, I explore challenges that simulation researchers face. First, I argue that assessing simulation convergence, rather than relying on inferential models, is the superior method for determining the number of dataset replications to use when conducting a simulation. I devise a novel way of assessing simulation convergence with rounded cumulative means and apply it to four examples alongside a more conventional analytical technique. Second, I highlight the importance of incorporating statistical decisions into the simulation process, illustrated with examples of decisions surrounding model selection, convergence of an individual model's estimates, and modifications made during preliminary statistical analyses (e.g., due to outliers or a perceived assumption failure). Third, I compare continuous manipulated variables that have been discretized into levels with those generated along the full continuum of possible values. Based on linear and non-linear simulation examples, discretized manipulated variables appear more effective than continuous ones, mainly because establishing simulation convergence along a continuum is far more taxing than doing so for a point estimate.
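A minimal sketch of how a rounded-cumulative-means convergence check could work, as I read the summary above (the rounding precision and stability window here are hypothetical parameters, not the dissertation's exact rule):

import numpy as np

def converged_replications(estimates, digits=2, window=100):
    """estimates: per-replication values of a simulation quantity.
    Declares convergence at the first replication where the cumulative
    mean, rounded to `digits` decimals, has stayed constant for `window`
    consecutive replications. Returns that replication count, or None."""
    n = len(estimates)
    cum_means = np.cumsum(estimates) / np.arange(1, n + 1)
    rounded = np.round(cum_means, digits)
    run = 1
    for i in range(1, n):
        # extend the run if the rounded cumulative mean is unchanged
        run = run + 1 if rounded[i] == rounded[i - 1] else 1
        if run >= window:
            return i + 1  # replications needed to reach stability
    return None  # never stabilized within the available replications

The same check applied along a continuum of a manipulated variable would have to hold at every grid point simultaneously, which illustrates why convergence is harder to establish there than at a single point estimate.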

  • 1 supplemental PDF