eScholarship
Open Access Publications from the University of California

About

The Journal of Systems Research (JSys) is a diamond open-access journal covering all areas of computer systems research.

Articles

[Solution] Algorithmic Heap Layout Manipulation in the Linux Kernel

To evaluate the severity of a security vulnerability, a security researcher usually tries to prove its exploitability by writing an actual exploit. In the case of buffer overflows on the heap, a necessary part of this is manipulating the heap layout in a way that creates an exploitable state, usually by placing a vulnerable object adjacent to a target object. This requires manual effort and extensive knowledge of the target. With a target as complex as the Linux kernel, this problem becomes highly non-trivial. To date, there has been little research on algorithmic solutions to this problem. In this work, we present Kernel-SIEVE, a framework for evaluating heap layout manipulation algorithms that target the SLAB/SLUB allocator in the Linux kernel. Inspired by previous work that targets user-space allocators [33–35], it provides an interface for triggering allocations/deallocations in the kernel and contains a feedback loop that returns the resulting distance of two target objects. With this, we create the (to our knowledge) first performance benchmarks for heap layout manipulation algorithms in the Linux kernel. We present and evaluate two algorithms: a pseudo-random search, whose performance serves as a baseline, and KEvoHeap, a genetic algorithm based on Heelan’s EvoHeap [33, 35]. We show that KEvoHeap is successful at creating the desired heap layout in all test cases and also surpasses the user-space performance benchmarks of EvoHeap. Finally, we discuss the challenges of applying these kinds of algorithms in real-world scenarios and weigh different possible approaches to tackle the problems that arise. Our research results are publicly available on GitHub [43].
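The core idea behind such a search — evolving sequences of allocator interactions against a distance-based fitness signal — can be sketched on a toy first-fit slab. Everything below (the slot count, the operation encoding, the GA parameters) is a hypothetical simplification for illustration, not KEvoHeap's actual implementation:

```python
import random

SLOTS = 16  # toy slab; all slots start occupied except two distant holes

def simulate(ops):
    """Replay an interaction sequence on a toy first-fit slab, then place
    the vulnerable object and the target object. Fitness is the slot
    distance between them (1 = adjacent, the desired layout)."""
    slab = ["noise"] * SLOTS
    slab[3] = slab[9] = None  # pre-fragmented: the two holes are far apart

    def alloc(tag):
        for i in range(SLOTS):
            if slab[i] is None:
                slab[i] = tag
                return i
        return None

    for op, arg in ops:
        if op == "alloc":
            alloc("noise")
        elif op == "free" and slab[arg] == "noise":
            slab[arg] = None
    v, t = alloc("vuln"), alloc("target")
    return 2 * SLOTS if v is None or t is None else abs(t - v)

def random_op():
    return random.choice([("alloc", 0), ("free", random.randrange(SLOTS))])

def evolve(pop_size=40, length=12, generations=60):
    """Minimal generational GA: truncation selection, one-point
    crossover, 10% per-gene mutation."""
    pop = [[random_op() for _ in range(length)] for _ in range(pop_size)]
    best = min(pop, key=simulate)
    for _ in range(generations):
        pop.sort(key=simulate)
        best = min(best, pop[0], key=simulate)
        if simulate(best) == 1:  # adjacent: desired layout reached
            break
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            children.append([random_op() if random.random() < 0.1 else g
                             for g in a[:cut] + b[cut:]])
        pop = children
    return best, simulate(best)

random.seed(0)
sequence, distance = evolve()
```

In the kernel setting the fitness signal would come from a feedback loop like Kernel-SIEVE's rather than from a simulated slab, and the operations would be real allocation/deallocation triggers.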

[Problem] Cerberus: Minimalistic Multi-shard Byzantine-resilient Transaction Processing

To enable scalable resilient blockchain systems, several powerful general-purpose approaches toward sharding such systems have been demonstrated. Unfortunately, these approaches all come with substantial costs for ordering and execution of multi-shard transactions.

In this work, we ask whether one can achieve significant cost reductions for processing multi-shard transactions by limiting the type of workloads supported. To initiate the study of this problem, we propose CERBERUS, a family of minimalistic primitives for processing single-shard and multi-shard UTXO-like transactions. The first CERBERUS variant we propose is core-CERBERUS (CCERBERUS). CCERBERUS uses strict UTXO-based environmental requirements to enable powerful multi-shard transaction processing with an absolute minimum amount of coordination between shards. In the environment we designed CCERBERUS for, CCERBERUS will operate perfectly with respect to all transactions proposed and approved by well-behaved clients, but does not provide any other guarantees.
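To make the setting concrete, here is a minimal toy model of single-round multi-shard UTXO commit — the coordination pattern whose cost CCERBERUS minimizes. All names are hypothetical and there is no Byzantine fault tolerance here at all; this is not the CERBERUS protocol itself:

```python
class Shard:
    """Toy shard holding a set of unspent transaction outputs (UTXOs)."""
    def __init__(self, utxos):
        self.utxos = set(utxos)

    def vote(self, inputs):
        # Approve iff every input this shard owns is still unspent.
        return all(u in self.utxos for u in inputs)

    def apply(self, inputs, outputs):
        self.utxos -= set(inputs)
        self.utxos |= set(outputs)

def process(shards, tx):
    """One-round commit: each involved shard votes on the inputs it
    owns; the transaction commits only if all involved shards approve.
    With well-behaved clients (no conflicting proposals), no further
    cross-shard coordination is needed."""
    if not all(shards[sid].vote(ins) for sid, ins in tx["inputs"].items()):
        return False
    for sid, ins in tx["inputs"].items():
        shards[sid].apply(ins, tx["outputs"].get(sid, []))
    return True

shards = {"A": Shard({"a1"}), "B": Shard({"b1"})}
ok = process(shards, {"inputs": {"A": ["a1"], "B": ["b1"]},
                      "outputs": {"A": ["a2"]}})
replay = process(shards, {"inputs": {"A": ["a1"]}, "outputs": {}})
```

The second call fails because `a1` was already spent — the UTXO discipline is what lets each shard decide locally on its own inputs.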

To illustrate that CCERBERUS-like protocols can also be of use in environments with faulty clients, we also demonstrate two generalizations of CCERBERUS, optimistic-CERBERUS and resilient-CERBERUS, that make different tradeoffs in complexity and costs when dealing with faulty behavior and attacks. Finally, we compare these three protocols and show their potential scalability and performance benefits over state-of-the-art general-purpose systems. These results underline the importance of the study of specialized approaches toward sharding in resilient systems.

[Solution] Byzantine Cluster-Sending in Expected Constant Cost and Constant Time

Traditional resilient systems operate on fully-replicated fault-tolerant clusters, which limits their scalability and performance. One way to take the step toward resilient high-performance systems that can deal with huge workloads is to enable independent fault-tolerant clusters to efficiently communicate and cooperate with each other, as this also enables high-performance techniques such as sharding. Recently, such inter-cluster communication was formalized as the Byzantine cluster-sending problem. Unfortunately, existing worst-case optimal protocols for cluster-sending all have linear complexity in the size of the clusters involved.

In this paper, we propose probabilistic cluster-sending techniques as a solution for the cluster-sending problem with only an expected constant message complexity, independent of the size of the clusters involved, even in the presence of highly unreliable communication. Depending on the robustness of the clusters involved, our techniques require only two to four message round-trips (without communication failures). Furthermore, our protocols can support worst-case linear communication between clusters. Finally, we have put our techniques to the test in an in-depth experimental evaluation that further underlines the exceptionally low expected costs of our techniques in comparison with other protocols. As such, our work provides a strong foundation for the further development of resilient high-performance systems.
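The "expected constant" behavior follows from a simple geometric argument: if each attempt picks a random replica per cluster and retries on failure, the expected number of attempts is 1/p, where p depends only on the fraction of non-faulty replicas, not on cluster size. A small Monte Carlo check of this intuition (a toy model with hypothetical parameters, not the paper's protocol):

```python
import random

def expected_attempts(n, f, trials=10000, seed=1):
    """Monte Carlo estimate of the expected number of (sender, receiver)
    pairs tried until both endpoints are non-faulty, with f Byzantine
    replicas per cluster of size n. The answer is ~1/p with
    p = ((n - f) / n) ** 2 -- independent of n for a fixed fraction."""
    rng = random.Random(seed)
    honest = (n - f) / n
    total = 0
    for _ in range(trials):
        attempts = 1
        # retry until both the chosen sender and receiver are honest
        while not (rng.random() < honest and rng.random() < honest):
            attempts += 1
        total += attempts
    return total / trials

small = expected_attempts(n=100, f=33)      # ~1/0.67^2 = ~2.23
large = expected_attempts(n=10000, f=3300)  # same fraction, same cost
```

Both estimates cluster around 2.23 attempts regardless of cluster size, which is the essence of the constant expected cost.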

[Solution] Mason: Scalable, Contiguous Sequencing for Building Consistent Services

Some recent services use a sequencer to simplify ordering operations on sharded data. The sequencer assigns each operation a multi-sequence number which explicitly orders the operation on each shard it accesses. Existing sequencers have two shortcomings. First, failures can result in some multi-sequence numbers never being assigned, exposing a non-contiguous multi-sequence, which requires complex scaffolding to handle. Second, existing implementations use single-machine sequencers, limiting service throughput to the ordering throughput of one machine.

We make two contributions. First, we posit that sequencers should expose our new contiguous multi-sequence abstraction. Contiguity guarantees every sequence number is assigned an operation, simplifying the abstraction. Second, we design and implement MASON, the first system to expose the contiguous multi-sequence abstraction and the first to provide a scalable multi-sequence. MASON is thus an ideal building block for consistent, scalable services. Our evaluation shows MASON unlocks scalable throughput for two strongly-consistent services built on it.
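The multi-sequence abstraction itself can be illustrated with a toy single-process sequencer. This is a sketch of the interface only, with hypothetical names — MASON's contribution is making this scalable and keeping it contiguous under failures, neither of which this sketch attempts:

```python
from collections import defaultdict

class ToyMultiSequencer:
    """Assigns each operation the next sequence number on every shard it
    accesses, atomically. Because numbers are handed out in order and
    never skipped, each per-shard sequence is trivially contiguous --
    the property MASON preserves even across failures."""
    def __init__(self):
        self._next = defaultdict(int)

    def assign(self, shards):
        msn = {}  # the multi-sequence number: shard -> position
        for s in sorted(shards):
            msn[s] = self._next[s]
            self._next[s] += 1
        return msn

seq = ToyMultiSequencer()
op1 = seq.assign({"A", "B"})  # touches shards A and B
op2 = seq.assign({"B", "C"})  # touches shards B and C
```

Here `op1` precedes `op2` on their shared shard B (position 0 vs. 1), so every shard that both operations touch observes them in the same order — the property that simplifies building consistent sharded services on top.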

[SoK] Evaluations in Industrial Intrusion Detection Research

Industrial systems are increasingly threatened by cyberattacks with potentially disastrous consequences. To counter such attacks, industrial intrusion detection systems strive to timely uncover even the most sophisticated breaches. Due to its criticality for society, this fast-growing field attracts researchers from diverse backgrounds, resulting in 130 new detection approaches in 2021 alone. This huge momentum facilitates the exploration of diverse promising paths but likewise risks fragmenting the research landscape and burying promising progress. Consequently, it needs sound and comprehensible evaluations to mitigate this risk and catalyze efforts into sustainable scientific progress with real-world applicability. In this paper, we therefore systematically analyze the evaluation methodologies of this field to understand the current state of industrial intrusion detection research. Our analysis of 609 publications shows that the rapid growth of this research field has positive and negative consequences. While we observe an increased use of public datasets, publications still only evaluate 1.3 datasets on average, and frequently used benchmarking metrics are ambiguous. At the same time, the adoption of newly developed benchmarking metrics sees little advancement. Finally, our systematic analysis enables us to provide actionable recommendations for all actors involved and thus bring the entire research field forward.

[Tool] Automatically Extracting Hardware Descriptions from PDF Technical Documentation

The ever-increasing variety of microcontrollers aggravates the challenge of porting embedded software to new devices, which requires much manual work, whereas code generators can be used only in special cases. Moreover, little technical documentation for these devices is available in machine-readable formats that could facilitate automating porting efforts. Instead, the bulk of documentation comes as print-oriented PDFs. We hence identify a strong need for a processor to access the PDFs and extract their data with high quality to improve the code generation for embedded software.

In this paper, we design and implement a modular processor that uses deterministic table processing to extract detailed datasets from PDF files containing technical documentation for thousands of microcontrollers. Namely, we systematically extract device identifiers, interrupt tables, packages and pinouts, pin functions, and register maps. In our evaluation, we compare the documentation from STMicro against existing machine-readable sources. Our results show that our processor matches 96.5 % of almost 6 million reference data points, and we further discuss identified issues in both sources. Hence, our tool yields very accurate data with only limited manual effort and can enable and enhance a significant number of existing and new code generation use cases in the embedded software domain that are currently limited by a lack of machine-readable data sources.
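A core step of deterministic table processing, once column boundaries have been recovered from the PDF layout, is assigning positioned word boxes to cells. The sketch below uses hypothetical inputs and column coordinates; a real processor must first detect those boundaries and also handle merged and line-wrapped cells:

```python
def row_to_cells(words, col_starts):
    """Map word boxes (x, text) from one visual table row to columns.
    `col_starts` are the x-coordinates where each column begins."""
    cells = [""] * len(col_starts)
    for x, text in sorted(words):
        # rightmost column whose start lies at or left of the word
        col = max(i for i, start in enumerate(col_starts) if start <= x)
        cells[col] = f"{cells[col]} {text}".strip()
    return cells

# hypothetical pinout row: pin name | alternate functions | pin number
row = row_to_cells([(12, "PA0"), (118, "ADC1_IN0"),
                    (160, "TIM2_CH1"), (230, "10")],
                   col_starts=[0, 100, 200])
```

Sorting by x before concatenation keeps multi-word cells (here the two alternate functions) in reading order, which is what makes the reconstruction deterministic.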