An Assortment of Sorts: Three Modern Variations on the Classic Sorting Problem
- Devanny, William Eric
- Advisor(s): Eppstein, David; Goodrich, Michael T.
Abstract
Sorting is one of the most well-studied algorithmic problems in computer science and a fundamental building block of many other algorithms. In this dissertation, we consider several variants of the classical sorting problem, all motivated by modern challenges or technologies. We present algorithms that solve these variants and provide lower bounds where possible.
The online list labeling problem asks us to maintain integer labels for a dynamic ordered list. As new elements are inserted, old elements may need to be relabeled to make room in the label space. Previous work has focused on minimizing the total number of relabelings performed. We instead analyze the version of the problem whose goal is to minimize the maximum number of times any single element is relabeled, which we call the online house numbering problem. It is motivated by modern solid-state memories, which have a limited write life. We provide two solutions to the house numbering problem: one that achieves the optimal maximum number of relabelings with a label space a logarithmic factor larger than optimal, and one that uses an optimal label space but is a logarithmic factor away from the optimal maximum number of relabelings.
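As a concrete illustration of the setting, the following Python sketch maintains integer labels for a dynamic ordered list and tracks how many times each element is relabeled, which is the quantity the house numbering problem minimizes. The even-redistribution strategy and all names in the sketch are hypothetical illustrations, not the data structures developed in the dissertation.

```python
# A minimal sketch of online list labeling, tracking the house numbering
# objective (the maximum number of times any single element is relabeled).
# The even-redistribution strategy below is an illustrative baseline only.

class LabeledList:
    def __init__(self, label_space=1 << 20):
        self.label_space = label_space   # size of the integer label space
        self.items = []                  # elements kept in list order
        self.labels = {}                 # element -> current integer label
        self.relabels = {}               # element -> number of relabelings

    def insert_after(self, item, predecessor=None):
        """Insert `item` immediately after `predecessor` (None = front of list)."""
        pos = 0 if predecessor is None else self.items.index(predecessor) + 1
        self.items.insert(pos, item)
        self.relabels[item] = 0
        lo = self.labels[self.items[pos - 1]] if pos > 0 else 0
        hi = (self.labels[self.items[pos + 1]]
              if pos + 1 < len(self.items) else self.label_space)
        if hi - lo > 1:
            self.labels[item] = (lo + hi) // 2   # room available: no relabelings
        else:
            self._redistribute()                 # no room: relabel the whole list

    def _redistribute(self):
        """Spread all elements evenly over the label space, counting relabelings."""
        gap = self.label_space // (len(self.items) + 1)
        for i, it in enumerate(self.items, start=1):
            new_label = i * gap
            if it in self.labels and self.labels[it] != new_label:
                self.relabels[it] += 1
            self.labels[it] = new_label

    def max_relabels(self):
        return max(self.relabels.values(), default=0)


# Hypothetical usage: inserting at the front of a tiny label space forces relabelings.
ll = LabeledList(label_space=16)
for x in "abcdefgh":
    ll.insert_after(x, predecessor=None)
print(ll.max_relabels())
```

A full redistribution relabels every element, so the maximum relabeling count of this baseline grows quickly; the dissertation's structures trade label space size against that maximum far more carefully.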
Sorting can also mean splitting a set of elements into groups of similar elements. Cryptographic handshakes, in which two parties securely determine whether they belong to a privileged group, motivate this form of sorting, which we call equivalence class sorting. Instead of sorting with a less-than operator, our goal is to use an equivalence relation operator to group a set of elements into its equivalence classes. We prove tight lower bounds matching the running times of previously known algorithms, and we give algorithms for equivalence class sorting in several models of parallel computation.
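To make the comparison model concrete, the sketch below groups elements using only an equivalence-relation oracle, scanning one representative per existing class for each new element. Both the strategy and the mod-3 example relation are hypothetical illustrations, not the dissertation's algorithms or lower-bound constructions.

```python
# A minimal sketch of equivalence class sorting: partition elements using only
# an equivalence-relation oracle, with no less-than comparisons available.

def equivalence_class_sort(elements, equivalent):
    """Partition `elements` into classes under the oracle `equivalent(a, b)`."""
    classes = []                       # classes[i][0] serves as that class's representative
    for x in elements:
        for cls in classes:
            if equivalent(x, cls[0]):  # one oracle query per existing class
                cls.append(x)
                break
        else:
            classes.append([x])        # x is not equivalent to any representative
    return classes


# Hypothetical example: elements are equivalent iff they agree modulo 3.
print(equivalence_class_sort(range(10), lambda a, b: a % 3 == b % 3))
# [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```

This baseline may make one query per existing class for every element; the dissertation's lower bounds concern how many such queries are unavoidable, and its parallel algorithms distribute the work across processors.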
Classical sorting algorithms output the sorted order of a given input list. When the data is continually changing, or "evolving", the output of a classical algorithm cannot be guaranteed to be accurate. We therefore consider a new algorithmic model called the evolving data model. In this model, every time a comparison is performed, two elements that are adjacent in the underlying order are swapped. No algorithm can ever compute the exact correct order of the elements in such an evolving list; instead, the goal is to converge, over time, to an order that is as close to the correct one as possible. We show that simply running insertion sort repeatedly achieves the best possible O(n) inversions relative to the underlying order, with exponentially high probability.
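The simulation below sketches this model under the common assumption that a uniformly random adjacent pair in the underlying order is swapped after each comparison. The parameters and the quadratic inversion counter are hypothetical illustrations meant only to show repeated insertion sort tracking an evolving order, not to reproduce the dissertation's analysis.

```python
import random

def inversions(perceived, true_order):
    """Count pairs ordered differently in `perceived` than in `true_order` (O(n^2) for clarity)."""
    rank = {x: i for i, x in enumerate(true_order)}
    return sum(1
               for i in range(len(perceived))
               for j in range(i + 1, len(perceived))
               if rank[perceived[i]] > rank[perceived[j]])

def repeated_insertion_sort(n=100, rounds=20, seed=0):
    rng = random.Random(seed)
    true_order = list(range(n))        # hidden underlying order, which keeps evolving
    perceived = true_order[:]          # the algorithm's current estimate of that order
    rng.shuffle(perceived)

    def compare(a, b):
        """Answer one comparison, then swap a random adjacent pair in the true order."""
        result = true_order.index(a) < true_order.index(b)   # linear scans kept for simplicity
        k = rng.randrange(n - 1)
        true_order[k], true_order[k + 1] = true_order[k + 1], true_order[k]
        return result

    history = []
    for _ in range(rounds):            # run insertion sort on the estimate, over and over
        for i in range(1, n):
            j = i
            while j > 0 and compare(perceived[j], perceived[j - 1]):
                perceived[j], perceived[j - 1] = perceived[j - 1], perceived[j]
                j -= 1
        history.append(inversions(perceived, true_order))
    return history

print(repeated_insertion_sort())   # one inversion count per round of insertion sort
```

In a typical run the inversion count drops sharply from the roughly n^2/4 inversions of a random starting permutation; the dissertation proves that repeated insertion sort converges to O(n) inversions with exponentially high probability.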