
UC Santa Cruz Electronic Theses and Dissertations


Gaussian Process Modeling for Upsampling Algorithms With Applications in Computer Vision and Computational Fluid Dynamics

(2020)

Across a variety of fields, interpolation algorithms have been used to upsample low-resolution or coarse data fields. In this work, novel Gaussian Process (GP) based methods are employed to solve a variety of upsampling problems. Specifically, three applications are explored: coarse data prolongation in Adaptive Mesh Refinement (AMR) in the field of Computational Fluid Dynamics, accurate document image upsampling to enhance Optical Character Recognition (OCR) accuracy, and fast and accurate Single Image Super Resolution (SISR). For AMR, a new, efficient, third-order accurate algorithm called GP-AMR is presented. Next, a novel, non-zero-mean, windowed GP model is developed to upsample low-resolution document images, yielding higher OCR accuracy than the industry standard. Finally, a hybrid GP convolutional neural network algorithm is used to generate a computationally efficient and high-quality SISR model.
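
For readers unfamiliar with the basic building block, the sketch below shows GP regression used as an upsampler on a coarse 1-D signal. It is a minimal illustration only, not the GP-AMR or windowed OCR models of the dissertation; the squared-exponential kernel, length-scale, and noise level are assumptions made for this example.

```python
# Minimal illustration of GP-based upsampling (not the dissertation's GP-AMR or
# windowed OCR models): fit a zero-mean GP with a squared-exponential kernel to
# coarse 1-D samples and predict values on a finer grid. Kernel choice,
# length-scale, and noise level are illustrative assumptions.
import numpy as np

def se_kernel(xa, xb, length_scale=0.5, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = xa[:, None] - xb[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_upsample(x_coarse, y_coarse, x_fine, noise=1e-6):
    """Posterior mean of a zero-mean GP at x_fine given the coarse observations."""
    K = se_kernel(x_coarse, x_coarse) + noise * np.eye(len(x_coarse))
    K_star = se_kernel(x_fine, x_coarse)
    alpha = np.linalg.solve(K, y_coarse)
    return K_star @ alpha

if __name__ == "__main__":
    x_coarse = np.linspace(0, 2 * np.pi, 16)
    y_coarse = np.sin(x_coarse)
    x_fine = np.linspace(0, 2 * np.pi, 128)
    y_fine = gp_upsample(x_coarse, y_coarse, x_fine)   # 8x upsampled signal
    print(np.max(np.abs(y_fine - np.sin(x_fine))))     # small interpolation error
```

Per the abstract above, the dissertation's applications adapt this basic posterior-mean machinery with, for example, non-zero mean and windowed formulations.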


Hybrid-Parallel Parameter Estimation for Frequentist and Bayesian Models

(2020)

Distributed algorithms in machine learning come in two main flavors: horizontal partitioning, where the data is distributed across multiple worker machines, and vertical partitioning, where the model parameters are partitioned across multiple machines. The main drawback of the former strategy is that the model parameters need to be replicated on every machine, which is problematic when the number of parameters is very large and cannot fit on a single machine. The drawback of the latter strategy is that the data needs to be replicated on each machine, which fails to scale to massive datasets.

The goal of this thesis is to achieve the best of both worlds by partitioning both the data and the model parameters, thus enabling the training of more sophisticated models on massive datasets. In order to do so, we exploit a structure that is observed in several machine learning models, which we term Double-Separability. Double-Separability means that the objective function of the model can be decomposed into sub-functions that can be computed independently. For distributed machine learning, this implies that both data and model parameters can be partitioned across machines and that stochastic updates for parameters can be carried out independently and without any locking. Furthermore, double-separability naturally lends itself to developing efficient asynchronous algorithms in which computation and communication happen in parallel, offering further speedup.

Some machine learning models, such as Matrix Factorization, directly exhibit double-separability in their objective function; however, the majority of models do not. My work explores techniques to reformulate the objective functions of such models to cast them into double-separable form. Often this involves introducing additional auxiliary variables that have natural interpretations. In this direction, I have developed Hybrid-Parallel algorithms for machine learning tasks that include Latent Collaborative Retrieval, Multinomial Logistic Regression, Variational Inference for Mixtures of Exponential Families, and Factorization Machines. The software resulting from this work is available for public use under an open-source license.
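
To make the double-separability idea concrete, here is a minimal sketch (not the thesis software) of stratified SGD for matrix factorization, the model named above as directly double-separable. The block-cyclic schedule, hyperparameters, and the convention that zero entries are unobserved are illustrative assumptions; in a hybrid-parallel setting the independent blocks within a stratum would run on separate workers, with both data and parameters partitioned.

```python
# Illustrative sketch: stratified SGD for matrix factorization. Within one
# "stratum" the blocks touch disjoint rows of U and disjoint columns of V, so
# their updates are independent and could run without locking on separate workers.
import numpy as np

def sgd_block(R_block, U_block, V_block, lr=0.01, reg=0.1):
    """One SGD pass over the observed (nonzero) ratings inside a single block."""
    rows, cols = np.nonzero(R_block)
    for i, j in zip(rows, cols):
        u, v = U_block[i].copy(), V_block[j].copy()
        err = R_block[i, j] - u @ v
        U_block[i] += lr * (err * v - reg * u)
        V_block[j] += lr * (err * u - reg * v)

def stratified_sgd(R, rank=8, n_blocks=4, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    row_parts = np.array_split(np.arange(m), n_blocks)
    col_parts = np.array_split(np.arange(n), n_blocks)
    for _ in range(epochs):
        for shift in range(n_blocks):                   # one stratum per shift
            for b in range(n_blocks):                   # blocks in a stratum are independent
                r_idx, c_idx = row_parts[b], col_parts[(b + shift) % n_blocks]
                U_b, V_b = U[r_idx], V[c_idx]           # local copies of the parameter blocks
                sgd_block(R[np.ix_(r_idx, c_idx)], U_b, V_b)
                U[r_idx], V[c_idx] = U_b, V_b           # write the updated blocks back
    return U, V
```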


Essays in International Finance

(2020)

This dissertation studies topics in international finance: the use of international reserves as a monetary policy tool by Emerging Markets (EM), the impact of trade costs on cross-country risk sharing, and the newly observed deviations from covered interest rate parity. Each chapter of the dissertation addresses one of these three topics.

The first chapter investigates extended Taylor rules and foreign exchange intervention functions in large Emerging Markets (EM), measuring the extent to which policies are designed to stabilize output, inflation, and exchange rates and to accumulate international reserves. We focus on two large emerging markets, India and Brazil. We also consider the impact of greater capital account openness and which rules dominate when policy conflicts arise. We find that output stabilization is a dominant characteristic of interest rate policy in India, as is inflation targeting in Brazil. Both countries actively use intervention policy to achieve exchange rate stabilization and, at times, to stabilize reserves around a target level tied to observable economic fundamentals. Large unpredicted intervention purchases (sales) accommodate low (high) interest rates, suggesting that external operations are subordinate to domestic policy objectives. We extend the work to Chile and China for purposes of comparison. Chile's policy functions are similar to Brazil's, while China pursues policies that substantially diverge from other EMs.
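
For orientation, an extended Taylor rule of the general form used in this literature can be written as below; the notation and the exact set of regressors are illustrative and not necessarily the chapter's specification.

$$ i_t = \rho\, i_{t-1} + (1-\rho)\bigl[\bar{i} + \alpha\,(\pi_t - \pi^{*}) + \beta\,\tilde{y}_t + \gamma\,\Delta e_t\bigr] + \varepsilon_t, $$

where $i_t$ is the policy interest rate, $\pi_t - \pi^{*}$ the inflation gap, $\tilde{y}_t$ the output gap, $\Delta e_t$ exchange-rate depreciation, $\rho$ an interest-rate smoothing parameter, and $\varepsilon_t$ a shock. A foreign exchange intervention function can be specified analogously, with intervention purchases or reserve accumulation as the dependent variable.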

The second chapter empirically examines whether trade costs impede cross-country consumption risk sharing. Using data for a large panel of countries over the period 1970-2014, we document that bilateral risk sharing improves once a pair of countries becomes partners under a regional trade agreement. Moreover, we establish a gravity model of consumption risk sharing, finding that bilateral risk sharing decreases with the geographical distance between countries. The effect is more pronounced in the absence of regional trade agreements. These empirical findings support the argument that lifting trade barriers promotes risk sharing across countries.

The third and last chapter examines deviations from covered interest rate parity, a phenomenon observed especially after the 2008 Great Financial Crisis (GFC). We measure these deviations in a cross-country setup and explore possible causal channels for this behavior. The empirical findings in this chapter show substantial heterogeneity in the causal channels across countries, limiting the prospects of a unified theoretical model to explain them.
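
For reference, covered interest rate parity and its deviation (the cross-currency basis) are commonly written in log form as

$$ x_t = i_t - \bigl(i_t^{*} + f_t - s_t\bigr), $$

where $i_t$ and $i_t^{*}$ are the domestic and foreign interest rates and $s_t$, $f_t$ are the log spot and forward exchange rates (domestic currency per unit of foreign currency). Under CIP, $x_t = 0$; the chapter studies the persistent non-zero values of this deviation observed after the GFC. The notation here is generic and not necessarily that of the chapter.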


A Field Guide to Exit Zero: Urban Landscape Essay Films, 1921 till Now

(2020)

This hybrid theory-practice dissertation advocates for a landscape mode of documentary media-making. Providing case studies from the history of non-narrative city filmmaking, this work breaks new ground by locating this form in the context of environmentalist and social justice concerns. Guided by two central questions (what is the subjectivity of a landscape, and what does it mean to say that a landscape has subjectivity?), this research provides a historical overview of non-narrative methodologies for representing urban nature as a collective subject and a collaborative agent. Foregrounding the function of musical and temporal structures, the use of improvisational techniques, and queer strategies of representation, this dissertation expands considerations of the city symphony genre to attend to jazz, feminist, postmodern, and environmentalist developments of the form. It also considers the lyric role of the acousmatic (off-screen) voice in relationship to the visual landscape and explores how the spoken word inspires productive forms of identification and dis-identification with the visual environment. The practice-based component of the research, Exit Zero: An Atlas of One City Block through Time, is an interactive documentary of a single city block in central San Francisco. This web-based media artwork presents a long view of the processes of gentrification and urban transformation. As a synecdoche for the hyper-gentrification of San Francisco, Exit Zero provides a poetic framework in which to explore the multiple dramatic metamorphoses of the city block made famous by Hayes Valley Farm, the temporary community garden built on top of a former freeway exit. Using the interaction metaphors of the compass and the timeline, this work juxtaposes the impacts of government policy and public infrastructure against the forces of anti-freeway activism and community social practice. Visitors are rewarded for their curiosity and encouraged to explore the various states of development and transformation of this block in a non-linear fashion, enacting a collaborative and improvisational relationship to the project's content and enabling the discovery of uncanny interconnections and poetic rhymes between seemingly disparate time periods.

In arguing for the urgency of validating a landscape mode of media-making that instigates collective forms of identification, this practice-theory dissertation catalyzes a new understanding of landscape as both a collective subject and a collaborative orientation to media-making.


Developing Methods for Construction of Population Pedigrees From Low Coverage Sequencing Data

(2020)

A population pedigree is a graph that captures the totality of the family and genetic histories within a population. While pedigrees contain an abundance of information advantageous for genomic studies, assembling one is often tedious, time-consuming, and fraught with error. A combination of highly multiplexed low-coverage sequencing, genotype imputation, and relationship inference software makes it feasible to develop a pedigree cheaply and efficiently. By applying this approach to an experimental admixed Drosophila melanogaster population, we developed a dataset that contains genome-wide variants for thousands of individuals in our population. We were also able to confidently identify over one thousand parent-offspring relationships from almost four thousand sequenced samples. However, we were not able to construct a complete pedigree due to overestimates of relatedness resulting from our population's mixed ancestry. Implementing software that accounts for population structure could rectify this issue and provide more accurate relationship inference within our population.
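
As a simplified illustration of the relationship-inference step (a stand-in, not the actual software used in this work), parent-offspring pairs can be flagged by the near-absence of opposite-homozygote sites between two samples, since a true parent and offspring share at least one allele at every site up to genotyping error:

```python
# Illustrative sketch only: flag candidate parent-offspring pairs from a
# genotype matrix (rows = samples, entries = 0/1/2 alt-allele counts, -1 = missing).
# Sites where one sample is homozygous reference and the other homozygous
# alternate ("IBS0" sites) should be essentially absent for parent-offspring pairs.
# Note: duplicate samples and identical twins also pass this filter, and
# population structure can distort other relatedness statistics.
import numpy as np
from itertools import combinations

def ibs0_rate(g1, g2):
    """Fraction of jointly called sites where the two samples are opposite homozygotes."""
    called = (g1 >= 0) & (g2 >= 0)
    opposite = ((g1 == 0) & (g2 == 2)) | ((g1 == 2) & (g2 == 0))
    return (opposite & called).sum() / max(called.sum(), 1)

def candidate_parent_offspring(genotypes, sample_ids, threshold=0.005):
    """Return sample pairs whose IBS0 rate falls below an error-tolerance threshold."""
    pairs = []
    for a, b in combinations(range(len(sample_ids)), 2):
        if ibs0_rate(genotypes[a], genotypes[b]) < threshold:
            pairs.append((sample_ids[a], sample_ids[b]))
    return pairs
```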


Power Over Power: The Politics of Energy Transition

(2020)

Power lines sparking wildfires, destroying homes, and shutting down power across Northern California have generated public fury against Pacific Gas & Electric (PG&E). This bankrupt utility’s grid needs substantial investment and upgrades to be made safe. Yet, the death knell for monopoly utilities may have already sounded; California’s energy transition is underway, and new energy providers and models challenge the incumbent utility’s monopoly control. Political struggles amidst legislators, utilities, new energy providers, and communities have emerged, and will be at the core of energy transitions in California and elsewhere.

To understand the social and political ramifications of future electricity systems, this dissertation explores how energy has been and will be provided and governed. Making the case that electricity is 'political', a historical narrative follows a series of concerted political decisions on technology and infrastructure that, over a century ago, created a centralized electricity grid under regulated monopoly utilities. This model is gradually being disrupted by decentralized energy resources such as rooftop solar, electric vehicles, charging stations, batteries, and microgrids. New technologies bring new institutional and governing arrangements, ranging from city, municipal, and cooperative control to individual control. As blackouts make it clear that energy is a commodity fundamental to our modern livelihoods, climate change and pollution concerns increasingly lead to the consideration of clean, affordable energy access as a right. Communities, such as in the case below, increasingly mobilize for this right.

Since its beginning in the 1990s, the Community Choice movement has explored alternative imaginaries of utilizing decentralized energy technology for local self-sufficiency and sustainability. Community Choice Aggregations (CCAs) are a model for local power regulated by local not-for-profit entities acting in their communities' best interest. CCAs began as a bottom-up, grassroots initiative and have transformed into an urban governance movement as the size and number of the organizations have grown. The first CCA in California emerged in Marin County in 2010, and today 19 CCAs provide electricity to 10 million state residents. The development of CCAs depicts the politics of energy transitions; competing visions of clean energy models by new energy providers are met with resistance by the incumbent utility and government regulatory agencies. My cases display how imaginaries of more sustainable futures clash in political realms, producing new ecologies of power.

With a bankrupt utility, still-nascent CCAs, and the intermittent nature of renewable energy, the state finds itself at a crossroads. Through personal interviews with regulators, policymakers, energy companies, activists, and local communities, I explore these imaginaries and the social and political implications of various energy and governance configurations.

Developments in California have implications elsewhere. Changes in energy provision are intertwined with social and ecological futures, justice and democracy around the world. The concluding chapter, drawing from personal research across both the developed and developing world, puts these elements into dialogue with each other.


Soft Leptons, Hard Problems: Searches for the Electroweak Production of Supersymmetric Particles in Compressed Mass Spectra with the ATLAS Detector

(2020)

Supersymmetry is an attractive extension of the Standard Model of particle physics that posits an additional spacetime symmetry relating fermions and bosons. Phenomenologically, supersymmetry predicts the existence of bosonic superpartners for each of the Standard Model fermions and vice versa. In doing so, it can address many outstanding issues in particle physics, including the nature of dark matter, the hierarchy problem, and gauge coupling unification.

This dissertation presents searches for the direct electroweak production of supersymmetric states within compressed mass spectra, which generically lead to soft particles in the final state. These searches use 139 fb$^{-1}$ of $\sqrt{s} = 13$ TeV proton-proton collision data collected by the ATLAS experiment at the Large Hadron Collider between 2015 and 2018. Selected events contain two oppositely-charged, same-flavor leptons with low transverse momenta, missing transverse energy, and additional hadronic activity from initial-state radiation.

No statistically significant deviations from the Standard Model predictions are observed in the data. The results are used to set limits on the masses of the supersymmetric states in the context of $R$-parity-conserving simplified models in which the lightest supersymmetric particle is a neutralino produced in the decays of a nearly mass-degenerate lightest chargino, second-to-lightest neutralino, or slepton. These limits significantly extend existing constraints on well-motivated dark matter scenarios.


The A-fibered Burnside Ring as A-fibered Biset Functor in Characteristic Zero

(2020)

Let A be an abelian group and let K be a field of characteristic zero containing roots of unity of all orders that occur as orders of finite-order elements of A. In this thesis we prove foundational properties of the A-fibered Burnside ring functor B^A_K as an A-fibered biset functor over K. This includes the determination of the lattice of subfunctors of B^A_K and the determination of the composition factors of B^A_K. These results extend results of Coşkun and the author for the A-fibered Burnside ring functor restricted to p-groups, and results of Bouc in the case that A is trivial, i.e., the case of the Burnside ring functor over fields of characteristic zero.


Optimization of Item Selection with Prediction Uncertainty

(2020)

Selecting items from a candidate pool to maximize the total return is a classical problem, one faced frequently both by people in everyday life and by engineers in the information technology industry, e.g., in digital advertising, e-commerce, and web search. For example, web UI designers try to find the best design among many candidates to display to users, and Google needs to select personalized, engaging ads to show users based on their historical online behavior. Each of these industries represents a market worth hundreds of billions of dollars, which means that even a small improvement in item selection efficiency can drive hundreds of millions of dollars of growth in the real world.

In these applications, the true value of each item is unknown and can only be estimated from observed historical data. There is a large body of research on building prediction models trained on historical data to estimate item values. Given data volume and computational resource restrictions, engineers choose different models, e.g., deep neural networks, gradient boosted trees, or logistic regression, to solve these problems. We do not dive deeply into this area in this dissertation. Instead, our focus is on how to maximize the total return given these predictions, especially by taking into account the prediction uncertainties during value optimization.

In large-scale real applications, the candidate pool can be extraordinarily large. It is infeasible to pick items from the pool and gather interactive feedback for exploration. In fact, not only is exploration infeasible, but even estimating the value of every item with a complex estimation model is almost impossible because of the need for real-time responses. For example, Apple needs to estimate users' favorite apps and recommend them when users visit the App Store, and Google needs to select ads to display given a user's search query. Millions of candidates would need to be scored by prediction models, and it is very challenging to support model prediction at that scale under low-latency constraints. Moreover, to achieve good prediction accuracy, the models used in industry are becoming more and more complex; e.g., the numbers of hidden neurons and layers in deep neural networks grow rapidly in real applications, which also increases latency significantly. All of this makes it infeasible to evaluate every candidate with a single complex model in large-scale applications.

To solve this problem, engineers usually adopt a cascading waterfall filtering method that filters items sequentially: instead of using one complex model to estimate the values of all candidates, multiple stages filter out candidates one after another. For example, a simple model is used in the first stage to estimate candidates' values and choose a small subset of all candidates; the selected items are then passed to another stage to be estimated by a more complex model. Intuitively, this cascading waterfall filtering method provides a good trade-off between infrastructure cost and prediction accuracy: it substantially reduces computational resource use while still selecting the most promising items accurately. However, there has been no systematic study of how to efficiently choose the number of stages and how many items to keep at each stage. Engineers tune these settings heuristically through personal experience or online experiments, which is very inefficient, especially when the system is dynamic and changes rapidly. In this dissertation, we propose a theoretical framework for the cascading waterfall filtering problem and develop a mathematical algorithm to obtain optimal solutions. Our method achieves a dramatic improvement in an important real-world application that adopts a cascading waterfall filtering system to select a few items from tens of millions of candidates.
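
The following is a minimal sketch of a two-stage cascade of the kind described above: a cheap model scores every candidate and keeps only the top-k, and an expensive model re-scores the survivors. The scorers, k values, and final slate size are illustrative placeholders, not the configuration optimized in the dissertation.

```python
# Minimal sketch of a two-stage cascading (waterfall) filter: a cheap model
# scores every candidate and keeps the top k_stage1; an expensive model
# re-scores only the survivors and returns the final slate.
import numpy as np

def cascade_select(candidates, cheap_score, expensive_score, k_stage1=1000, k_final=10):
    """Return the k_final candidates ranked best by the expensive model,
    considering only the k_stage1 candidates ranked best by the cheap model."""
    s1 = np.asarray([cheap_score(c) for c in candidates])
    survivors = [candidates[i] for i in np.argsort(-s1)[:k_stage1]]
    s2 = np.asarray([expensive_score(c) for c in survivors])
    return [survivors[i] for i in np.argsort(-s2)[:k_final]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pool = list(rng.standard_normal((50_000, 16)))       # toy candidate feature vectors
    w_cheap = rng.standard_normal(4)
    w_full = rng.standard_normal(16)
    top = cascade_select(
        pool,
        cheap_score=lambda x: x[:4] @ w_cheap,           # fast, approximate scorer
        expensive_score=lambda x: x @ w_full,            # slow, accurate scorer
        k_stage1=2000,
        k_final=10,
    )
    print(len(top), "items selected")
```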

There are also cases in which the candidate pool is relatively small; for instance, the number of web UI candidates is usually less than one hundred. In such cases we are able to explore during the item selection process. A typical exploration setting is online experimentation, which is widely used to test and select items in real applications and provides interactive feedback for evaluating items. Taking online experiments as an example, we usually randomly segment users into several groups, show them different candidates, and then compare the overall performance of each candidate to find the item with the largest value. Among all designs, A/B testing, which typically segments users into two statistically equivalent groups to measure the difference between two versions of a single variable, is the most popular. For instance, to compare the impact of one ad versus another, we need to see the impact of exposing a user to the first ad and not the second, and then compare with the converse situation. However, a user cannot both see the first ad and not see it. Consequently, we need to create two "statistically equivalent populations" and expose users randomly to one or the other. This method is straightforward, but its drawback is also obvious: because it must measure both versions, it cannot expose all users to the best version, which leads to potential value loss. Multi-armed bandit algorithms, e.g., Randomized Probability Matching (RPM) and Upper Confidence Bounds (UCB), whose objective is to maximize the total return during the experiment, have been proposed as improvements. However, these methods do not take into account the statistical confidence level of the experiment's final result and its impact on item selection in the post-experimental stage. To address this, we develop algorithms that strike a good trade-off between reducing statistical uncertainty and maximizing cumulative reward, aiming to maximize the total expected reward of item selection over the full horizon, which includes both the experimental stage and the post-experimental stage. The proposed algorithms demonstrate consistent and statistically significant improvements across different settings, outperforming both A/B testing and multi-armed bandit algorithms.
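
For concreteness, here is a minimal sketch of the Randomized Probability Matching idea (Thompson sampling with Beta posteriors) for Bernoulli-reward arms, i.e., one of the bandit baselines discussed above rather than the algorithms proposed in this dissertation; the arm count, priors, and horizon are illustrative.

```python
# Illustrative sketch of randomized probability matching / Thompson sampling for
# Bernoulli-reward arms. Each arm keeps a Beta posterior; at every step the arm
# with the highest sampled success probability is shown.
import numpy as np

def thompson_sampling(true_rates, horizon=10_000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_rates)
    successes = np.ones(k)   # Beta(1, 1) uniform priors
    failures = np.ones(k)
    total_reward = 0
    for _ in range(horizon):
        samples = rng.beta(successes, failures)      # one posterior draw per arm
        arm = int(np.argmax(samples))                # probability matching
        reward = rng.random() < true_rates[arm]      # simulated Bernoulli feedback
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward, successes, failures

# A/B testing, by contrast, would allocate users uniformly at random between the
# arms for the whole experiment, paying a larger exploration cost.
if __name__ == "__main__":
    print(thompson_sampling([0.05, 0.06])[0])
```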


The Collaborative Regulation of Cortical Neuron Subtype Specification by TLE4 and FEZF2

(2020)

Projection neuron subtype identities in the cerebral cortex are established through the expression of pan-cortical and subtype-specific effector genes, which execute terminal differentiation programs that bestow neurons with a glutamatergic neuron phenotype and subtype-specific morphology, physiology, and axonal projections. Whether pan-cortical glutamatergic and subtype-specific characteristics are regulated by the same genes or controlled by distinct programs remains largely unknown. Here, I show that the transcriptional corepressor TLE4 is expressed specifically in postmitotic corticothalamic projection neurons, where it functions to regulate the molecular, dendritic, and electrophysiological characteristics unique to corticothalamic neurons. I also demonstrate that TLE4 directly interacts with the forebrain embryonic zinc finger protein FEZF2 within corticothalamic projection neurons to facilitate the transcriptional repression of subcerebral projection neuron identity. Through the utilization of our novel Fezf2-Bac-EnR transgenic mouse line, I was then able to rescue the molecular defects of corticothalamic neurons in the cortex of Tle4 knockout mice and restore the dendritic and electrophysiological characteristics of these neurons. Overall, the work presented provides an in-depth investigation into the transcriptional regulation of subtype-specific cortical projection neuron identity by Tle4 and Fezf2, thereby contributing novel insight into our current understanding of mammalian cortical development.