The UC Davis College of Engineering comprises seven academic departments: Biological & Agricultural, Biomedical, Chemical and Materials Science, Civil and Environmental, Computer Science, Electrical and Computer, and Mechanical and Aerospace Engineering.
Resilience is a relatively new concept in computer security that is continuing to evolve. The research community has not settled on an exact definition of resilience, but most agree that this security property should include resistance to attack, recovery from damage, and the ability of a system to learn from an attack and better resist it in the future. Much of the existing research has focused on resilience solely in terms of availability, or on defining metrics to describe and compare the resilience of systems. The goal of this dissertation is not only to explore the possibility of a more general framework for resilience, but also to analyze the effectiveness of methods and technologies that can be used to measure and provide resilience.
The dissertation begins by covering common elements of computer security, providing examples, addressing vulnerabilities and exploits, and suggesting potential solutions. In later sections, we examine the feasibility of the proposed solutions. Alternative solutions are compared in the context of a network's priorities, abilities, and dependencies. Our work is inspired by the need for better security metrics to quantitatively evaluate and compare different systems and networks. A robust set of metrics that describe the security and recovery features of systems can provide a foundation for at least two key concepts: a network resilience communication protocol and a resilience testing framework. The communication protocol could help network administrators maintain and improve the resilience of their networks. It would facilitate communication between systems on the network so that potential threats can be identified quickly and changes can be made autonomously to reduce the impact of a threat without the need for human intervention. The testing framework can be used to test a system's resilience to specific attacks, packaged as portable modules. Network administrators can use the data and visualization results of this framework to make informed decisions about how to improve their resilience. The communication protocol may be able to analyze results from the testing framework to improve a network's resilience. The goal of these two projects is to develop solutions that can improve the resilience of networks in general, taking into account their size, security requirements, and critical functions.
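To make the idea of portable test modules feeding a resilience score concrete, here is a minimal hypothetical sketch. The names (`Result`, `summarize`) and the scoring formula are purely illustrative assumptions, not the dissertation's actual framework or metrics:

```python
# Hypothetical sketch of results produced by portable resilience-test
# modules and a toy aggregate score. All names and the scoring formula
# are illustrative assumptions, not the dissertation's framework.
from dataclasses import dataclass
from typing import List

@dataclass
class Result:
    attack: str          # name of the simulated attack
    recovered: bool      # did the system return to service?
    downtime_s: float    # observed unavailability in seconds

def summarize(results: List[Result]) -> float:
    """Toy resilience score: fraction of attacks recovered from,
    discounted by average downtime (higher is better)."""
    if not results:
        return 0.0
    recovered = sum(r.recovered for r in results) / len(results)
    avg_down = sum(r.downtime_s for r in results) / len(results)
    return recovered / (1.0 + avg_down)

results = [Result("syn-flood", True, 2.0), Result("port-scan", True, 0.0)]
print(summarize(results))  # recovered from both attacks, 1 s average downtime
```

A real framework would of course use richer, validated metrics; the point is only that module outputs can be aggregated into comparable numbers.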
Tanglegrams are a tool for inferring the joint evolution of species, widely used in ecology to study the joint evolutionary history of parasitic or symbiotically linked species. Visually, a tanglegram is a pair of evolutionary trees drawn with their leaves facing each other; a species at a leaf of one tree is ecologically related to a species at a leaf of the other tree, and related species from the two trees are connected by an edge. The number of crossings between the edges joining the leaves indicates the relatedness of the trees. Earlier work on tanglegrams assumed the same number of leaves in both trees and a single edge between the leaves of the two trees. In this paper we consider multiple edges from a leaf; these edges correspond to ecological events such as duplication and host switching. We generalize the definition of tanglegrams to admit multiple edges between leaves and present integer programs for minimizing the number of crossings. The integer program has an XOR formulation very similar to that for standard tanglegrams. We also show how the ideas for distance minimization on tanglegrams can be extended to generalized tanglegrams, and we show that the tanglegram drawings used in ecology can be improved to have fewer crossings using our integer programs.
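The crossing count being minimized can be sketched directly: two inter-tree edges cross exactly when the relative order of their endpoints differs between the two sides, and this definition already handles multiple edges per leaf. The brute-force minimizer below is only an illustration of the objective; it searches over all right-side leaf orders, whereas the actual problem (and the integer programs above) restricts to orders realizable by rotating internal tree nodes:

```python
# Illustrative crossing counter for (generalized) tanglegrams.
# Edges sharing an endpoint on one side fan out from the same leaf
# and are not counted as crossing.
from itertools import combinations, permutations

def count_crossings(left_order, right_order, edges):
    """Count pairwise crossings among inter-tree edges (u, v), where u is
    a left-tree leaf and v a right-tree leaf. Two edges cross iff their
    endpoints appear in opposite relative order on the two sides."""
    lpos = {leaf: i for i, leaf in enumerate(left_order)}
    rpos = {leaf: i for i, leaf in enumerate(right_order)}
    crossings = 0
    for (u, v), (x, y) in combinations(edges, 2):
        if (lpos[u] - lpos[x]) * (rpos[v] - rpos[y]) < 0:
            crossings += 1
    return crossings

def min_crossings_brute(left_order, right_leaves, edges):
    """Minimum crossings over ALL right-side leaf orders. NOTE: the real
    problem only allows orders reachable by subtree rotations, so this
    unconstrained search is a lower bound, for illustration only."""
    return min(count_crossings(left_order, list(p), edges)
               for p in permutations(right_leaves))

# Leaf 0 has two edges (a "duplication"-style event in the generalized setting).
edges = [(0, "b"), (1, "a"), (0, "a")]
print(count_crossings([0, 1], ["a", "b"], edges))   # one crossing
print(min_crossings_brute([0, 1], ["a", "b"], edges))  # flipping the right side removes it
```

An integer-program formulation replaces this enumeration with a binary "flipped or not" variable per internal node, with the XOR structure mentioned above encoding how flips change each pairwise crossing.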
In this paper, we present our GPU implementation of the quotient filter, a compact data structure designed to support approximate membership queries. The quotient filter is similar to the better-known Bloom filter; however, in addition to set insertion and membership queries, the quotient filter also supports deletion and the merging of filters without requiring rehashing of the data set. Furthermore, the quotient filter can be extended to include counters without increasing the memory footprint. This paper describes our GPU implementation of two types of quotient filters: the standard quotient filter and the rank-and-select-based quotient filter. We describe the parallelization of all filter operations, including a comparison of the four different methods we devised for parallelizing quotient filter construction. In solving this problem, we found that we needed an operation similar to a parallel scan, but for non-associative operators. One outcome of this work is a variety of methods for computing parallel scan-type operations on a non-associative operator.
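The core idea behind the quotient filter can be sketched in a few lines: a p-bit fingerprint of each key is split into a q-bit quotient, which selects a home slot, and an r-bit remainder, which is the value actually stored. The toy sketch below uses per-slot Python lists purely to illustrate this hashing scheme; the real structure instead stores remainders in a flat table of 2^q slots with three metadata bits per slot to resolve collisions by shifting, and it says nothing about the paper's GPU parallelization:

```python
# Toy illustration of the quotient-filter hashing idea only. The real
# structure uses a compact shifted-slot layout with three metadata bits
# per slot, not per-slot lists; this sketch is an assumption-laden
# simplification, not the paper's implementation.
class ToyQuotientFilter:
    def __init__(self, q=8, r=8):
        self.q, self.r = q, r
        self.slots = [[] for _ in range(1 << q)]  # 2^q home slots

    def _split(self, key):
        """Take p = q + r bits of the hash; high q bits are the quotient
        (home slot index), low r bits are the stored remainder."""
        h = hash(key) & ((1 << (self.q + self.r)) - 1)
        return h >> self.r, h & ((1 << self.r) - 1)

    def insert(self, key):
        fq, fr = self._split(key)
        if fr not in self.slots[fq]:
            self.slots[fq].append(fr)

    def may_contain(self, key):
        # False positives possible (fingerprint collision), never false negatives.
        fq, fr = self._split(key)
        return fr in self.slots[fq]

    def delete(self, key):
        # Deletion works because the full remainder is stored, unlike a Bloom filter.
        fq, fr = self._split(key)
        if fr in self.slots[fq]:
            self.slots[fq].remove(fr)
```

Because the filter stores fingerprints rather than a shared bit array, two filters built with the same hash parameters can also be merged without rehashing the original keys.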
For membership queries, we achieve a throughput of up to 1.13 billion items/second for the rank-and-select-based quotient filter: a speedup of 3x over the BloomGPU filter. Our fastest filter build method achieves a speedup of 2.1--3.1x over BloomGPU, with a peak throughput of 621 million items/second, and a rate of 516 million items/second for a 70% full filter. However, we find that our filters do not perform incremental updates as fast as the BloomGPU filter. For a batch of 2 million items, we perform incremental inserts at a rate of 81 million items/second -- a 2.5x slowdown compared to BloomGPU's throughput of 201 million items/second. The quotient filter's memory footprint is comparable to that of a Bloom filter.