This thesis presents a search for $WW$ and $WZ$ resonances in $pp$ collisions at $\sqrt{s}=13$ TeV recorded by the ATLAS detector, corresponding to an integrated luminosity of 139 fb$^{-1}$. Diboson resonances are predicted in a number of Standard Model (SM) extensions, such as extended gauge models and extra-dimensional models. This search looks for resonances where one $W$ boson decays leptonically and the other $W$ or $Z$ boson decays hadronically. The search is sensitive to diboson resonance production via the vector-boson fusion (VBF), quark-antiquark annihilation (DY), and gluon-gluon fusion (ggF) mechanisms. No significant excess of events is observed with respect to the SM backgrounds, and constraints on the masses of new $W'$, $Z'$, and bulk-RS gravitons are extended up to 3.7 TeV, depending on the model. As the dominant backgrounds in this search contain gluon-initiated jets, classifying jets as quark-initiated or gluon-initiated increases the sensitivity of this analysis to new physics. Towards this end, this thesis provides a calibrated quark-gluon tagger based on the multiplicity of charged particles within a jet.

## Scholarly Works (464 results)

We present a search for new exotic physics in pp collisions at √s = 8 TeV recorded by the ATLAS detector at the Large Hadron Collider (LHC) at CERN. Our search focuses on the production of four-top-quark final states. Specifically, we look for events with two or more top quarks produced with very high transverse momentum, tagged using jet substructure variables. Events with at least two top-tagged jets are also required to have at least two b-tagged jets, with the further requirement that one of the b-tagged jets lie outside the conical radius of the top-tagged jets. Finally, we look in events that have a large amount of total transverse momentum (HT), optimized for several potential new signal models (low HT and high HT). In a data sample of 20.3 fb^{-1} we measured an expected background of 13.04 ± 3.150^{+3.925}_{-4.751} events in the low-HT channel and an expected background of 5.024 ± 1.918^{+1.753}_{-2.971} events in the high-HT channel. We expect to set a limit on the Kaluza-Klein mass scale mKK > 1.06 TeV at the 95% CL, assuming no excess or deficit of events is seen in the data.

Supersymmetry (SUSY) is an extension of the Standard Model that predicts a boson (fermion) partner for each fermion (boson) in the Standard Model. Weak-scale SUSY is attractive because it improves gauge coupling unification, reduces fine-tuning in the Higgs sector, and provides a dark matter candidate. This thesis presents a dedicated search for direct production of new, colorless, weak-scale states with compressed mass spectra in final states characterized by soft visible decay products. This analysis uses pp collisions at √s = 13 TeV delivered by the Large Hadron Collider and collected by the ATLAS experiment during 2015 and 2016, corresponding to 36.1 fb−1 of integrated luminosity. This analysis selects events with two soft electrons or muons, an intermediate amount of missing transverse momentum (E_T^miss), and a hard jet. Backgrounds with two prompt leptons are estimated with Monte Carlo simulation, while reducible backgrounds are estimated with a mix of Monte Carlo and data-driven methods. Results are consistent with Standard Model expectations and are used to set limits on compressed supersymmetric states. Limits on compressed electroweak SUSY models are extended for the first time since the Large Electron-Positron collider (LEP).

Non-uniform memory and network access is a major source of performance degradation in SIMD supercomputers. We investigate the problem of finding general XOR-schemes to minimize memory conflicts and network contention for accessing arrays with arbitrary data templates, defined by template bases.

The XOR-matrix is defined so that each column corresponds to a distinct vector in the union of all template bases. A restriction of the XOR-matrix to a given template is formed by concatenation of the columns corresponding to the template basis. We prove that a necessary and sufficient condition for conflict-free and network-contention-free access for the Baseline network is that certain sub-matrices of every template's restricted matrix be non-singular. A new characterization of the Baseline network and XOR-matrices is proposed. Finding an XOR-matrix for accessing arbitrary templates is proved to be an NP-complete problem.

To minimize memory and network contention, a heuristic algorithm is proposed for finding XOR-matrices. The algorithm determines successive rows, from the bottom up. Given the previous row, the algorithm determines: 1) the constraints required by each template's restricted matrix, and 2) the row solution, by solving a set of simultaneous equations. To avoid backtracking, a randomized approach is used. The time complexity of the heuristic is O(tpn^2), where t, 2^p, and n are the number of templates, the number of processors, and the number of distinct vectors of the template bases, respectively. Evaluation shows that the proposed XOR-schemes significantly reduce memory and network contention compared to interleaving and to XOR-schemes optimized for a set of static reference templates.
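The XOR-scheme idea above can be made concrete with a small sketch. Under an XOR-scheme, the bank index of an address is the GF(2) product of the XOR-matrix with the address bits, and a template is accessed conflict-free exactly when the columns selected by its basis are linearly independent over GF(2). The matrix, template bases, and sizes below are illustrative (4 banks, 4-bit addresses), not taken from the abstract above.

```python
# Illustrative XOR-scheme for 2^p = 4 memory banks and 4-bit addresses.
# The XOR-matrix is stored column-wise: cols[j] is the p-bit column
# associated with address bit j; the bank index of an address is the
# XOR (GF(2) sum) of the columns selected by its set bits.

def bank(cols, addr):
    """Bank index of `addr` under the XOR-matrix given by `cols`."""
    b = 0
    for j, col in enumerate(cols):
        if addr >> j & 1:
            b ^= col
    return b

def independent_gf2(vectors):
    """True if the given p-bit column vectors are linearly independent
    over GF(2) (Gaussian elimination on leading bits)."""
    basis = {}
    for v in vectors:
        while v:
            hb = v.bit_length() - 1
            if hb not in basis:
                basis[hb] = v
                break
            v ^= basis[hb]
        else:
            return False  # v reduced to zero: linearly dependent
    return True

# An XOR-matrix whose restriction to both the "row" template (address
# bits 0,1) and the "column" template (address bits 2,3) is non-singular:
cols = [0b01, 0b10, 0b01, 0b10]
row_banks = [bank(cols, a) for a in range(4)]       # row template
col_banks = [bank(cols, 4 * k) for k in range(4)]   # column template (stride 4)
print(sorted(row_banks), sorted(col_banks))         # both templates hit all 4 banks

# Plain interleaving (bank = addr mod 4) serializes the column template:
interleave = [0b01, 0b10, 0b00, 0b00]
print({bank(interleave, 4 * k) for k in range(4)})  # every access hits bank 0
```

The non-singularity condition from the abstract corresponds here to `independent_gf2([cols[0], cols[1]])` and `independent_gf2([cols[2], cols[3]])` both holding.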

The serialization of memory accesses is a major limiting factor in high-performance SIMD computers. For these machines, the data templates accessed by a program can be perceived by the compiler, and therefore the design of conflict-free storage schemes may dramatically improve performance.

The problem of finding storage schemes, with minimum hardware requirements, for accessing a set of arbitrary templates is proved to be NP-complete. To design cost-effective storage schemes, we introduce two parameters: the number of 1's in the storage matrix (affecting hardware complexity) and the access frequency of each template. Heuristics are proposed to find storage schemes with minimum hardware (Perfect Schemes) but without enforcing a high degree of conflict reduction. Another heuristic is proposed to augment perfect storage schemes by using minimum additional hardware in order to reduce the degree of conflict (Semi-Perfect Schemes).

Experimental evaluation is carried out using a Monte Carlo simulation. Performance of the proposed heuristics is compared to solutions obtained using branch-and-bound search. Results show that perfect schemes may deviate on average by 20% from the optimum access time in the case of 10 arbitrary templates and 16 memories. However, semi-perfect schemes lead to a dramatic reduction of the degree of conflict compared to perfect schemes. The proposed heuristic storage schemes outperform row-major interleaving and row-column-diagonals storage. The time complexity of the proposed heuristics is O(p(t + n) + n^2t), where t, 2^p, and n are the number of templates, the number of processors, and the number of distinct vectors of the template bases, respectively.

An ordered minimal perfect hash table is one in which no collisions occur among a predefined set of keys, no space is unused, and the data are placed in the table in order. A new method for creating ordered minimal perfect hashing functions is presented. The method is based on one developed by Fox, Heath, Daoud, and Chen, but it creates hash functions with representation space requirements closer to the theoretical lower bound. The method presented requires approximately 10% less space to represent the generated hash functions, and is easier to implement than Fox et al.'s. However, a higher time complexity makes it practical for small sets only (fewer than 1000 keys).
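As a hedged illustration of the definition above (not of the Fox, Heath, Daoud, and Chen construction or the variant the abstract describes), the three properties of an ordered minimal perfect hash function can be checked mechanically. The stand-in function below is a plain lookup table; a real scheme encodes the same key-to-rank mapping in a compact representation, which is where the space savings discussed above apply.

```python
# Sketch of what "ordered minimal perfect" requires of a hash function h
# over a fixed key set: no collisions (perfect), table size equal to the
# number of keys (minimal), and slot order matching key order (ordered).
# The dict-based h below is a stand-in, not a real compact construction.

def is_ordered_minimal_perfect(h, keys):
    slots = [h(k) for k in keys]
    n = len(keys)
    perfect_and_minimal = sorted(slots) == list(range(n))
    ordered = all(h(k) == rank for rank, k in enumerate(sorted(keys)))
    return perfect_and_minimal and ordered

keys = ["plum", "apple", "pear"]
rank = {k: i for i, k in enumerate(sorted(keys))}  # apple->0, pear->1, plum->2
h = rank.__getitem__
print(is_ordered_minimal_perfect(h, keys))         # True

# An ordinary modular hash is generally neither perfect nor ordered:
h_mod = lambda k: sum(map(ord, k)) % len(keys)
print(is_ordered_minimal_perfect(h_mod, keys))     # False ("plum" and "apple" collide)
```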