
About

The Journal of Systems Research (JSys) is a diamond open-access journal covering all areas of computer systems research.

Articles

[Solution] ALCC: Migrating Congestion Control To The Application Layer In Cellular Networks

TCP is known to perform poorly in cellular network environments. Yet, most mobile applications are explicitly built on the conventional TCP stack or implicitly leverage TCP tunnels to various cellular middleboxes, including performance-enhancing proxies, application-specific edge proxies, VPN proxies, and NAT boxes. Despite significant advances in the design of new congestion control (CC) protocols for cellular networks, deploying these protocols without bypassing the underlying TCP tunnels has remained a challenging proposition. This paper proposes the design of a new Application Layer Congestion Control (ALCC) framework that allows any new CC protocol to be implemented easily at the application layer, within or above an application-layer protocol that sits atop a legacy TCP stack, and drives it to deliver approximately the same performance as the native protocol. The ALCC socket sits on top of a traditional TCP socket, yet it can leverage the large congestion windows opened by TCP connections to carefully execute an application-level CC within the window bounds of the underlying TCP connection. This paper demonstrates how ALCC can be applied to three well-known cellular CC protocols: Verus, Copa, and Sprout. For these protocols, ALCC achieves throughput and delay characteristics comparable to the native protocols (within 3-10%) across different networks and traffic conditions. ALCC allows a server-side implementation of these protocols with no client modifications and zero bytes of overhead. The ALCC framework can be easily integrated with off-the-shelf applications such as file transfers and video streaming.
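To make the core idea concrete, here is a minimal, hypothetical Python sketch of an ALCC-style socket: an application-level congestion window gates writes onto an ordinary TCP socket, so the application-layer CC always operates within whatever window the underlying TCP connection has opened. The class name, the AIMD update rule, and the feedback hook are illustrative assumptions, not the paper's implementation (which plugs in Verus, Copa, or Sprout logic instead).

```python
import socket
import time

class ALCCSocket:
    """Illustrative sketch: wraps a connected TCP socket and paces writes
    with an application-level congestion window (in bytes)."""

    def __init__(self, sock: socket.socket, mss: int = 1448):
        self.sock = sock
        self.mss = mss
        self.cwnd = 10 * mss   # app-level window; starts like TCP's IW10
        self.inflight = 0      # bytes sent but not yet app-acknowledged

    def send(self, data: bytes) -> None:
        sent = 0
        while sent < len(data):
            budget = self.cwnd - self.inflight
            if budget <= 0:
                time.sleep(0.001)  # wait for app-level acks to free window
                continue
            chunk = data[sent:sent + min(budget, self.mss)]
            # The kernel's TCP stack still enforces its own congestion
            # window underneath; we only ever shrink the effective window.
            self.sock.sendall(chunk)
            self.inflight += len(chunk)
            sent += len(chunk)

    def on_app_ack(self, acked_bytes: int, congested: bool) -> None:
        """Feed application-level feedback into a CC policy; this AIMD
        rule is a stand-in for Verus/Copa/Sprout logic."""
        self.inflight = max(0, self.inflight - acked_bytes)
        if congested:
            self.cwnd = max(self.mss, self.cwnd // 2)  # multiplicative decrease
        else:
            self.cwnd += max(1, self.mss * acked_bytes // self.cwnd)  # additive increase
```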

[SoK] Identifying Mismatches Between Microservice Testbeds and Industrial Perceptions of Microservices

Industrial microservice architectures vary so wildly in their characteristics, such as size or communication method, that comparing systems is difficult and often leads to confusion and misinterpretation. In contrast, the academic testbeds used to conduct microservices research employ a very constrained set of design choices. This lack of systematization in key design choices when developing microservice architectures has led to uncertainty over how to use experiments from testbeds to inform practical deployments, and indeed whether this should be done at all. We conduct semi-structured interviews with industry participants to understand the representativeness of existing testbeds’ design choices. Surprising results included the presence of cycles in industry deployments, as well as a lack of clarity about the presence of hierarchies. We then systematize the possible design choices we learned about from the interviews, and identify important mismatches between our interview results and testbeds’ designs that will inform future, more representative testbeds.

[Solution] Prepare your video for streaming with Segue

We identify new opportunities in video streaming, involving the joint consideration of offline video chunking and online rate adaptation. Because a video’s complexity varies over time, certain parts are more likely to cause performance impairments during playback with a particular rate adaptation algorithm. To address this issue, we propose Segue, which carefully uses variable-length video segments and augments specific segments with additional bitrate tracks. The key novelty of our approach is in making such decisions based on the video’s time-varying complexity and the expected rate adaptation behavior over time. We propose and implement several methods for such adaptation-aware chunking. Our results show that Segue substantially reduces rebuffering and quality fluctuations while maintaining delivered video quality; Segue improves QoE by 9% on average, and by 22% in low-bandwidth conditions. Finally, we view our problem framing as a first step in a new thread of algorithmic and design innovation in video streaming, and leave the reader with several interesting open questions.
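The following Python sketch shows one plausible form of adaptation-aware chunking: cut a segment early where the complexity signal jumps, and add an extra bitrate track to segments with high peak complexity. The complexity signal, thresholds, and augmentation rule here are assumptions for illustration only; Segue's actual methods additionally model the expected rate adaptation behavior over time.

```python
from typing import List, Tuple

def chunk_by_complexity(per_second_complexity: List[float],
                        min_len: int = 2, max_len: int = 10,
                        jump_thresh: float = 0.3) -> List[Tuple[int, int]]:
    """Return (start, end) second-ranges: end a segment early when
    complexity jumps, so rate adaptation can react at the boundary."""
    segments, start = [], 0
    for t in range(1, len(per_second_complexity)):
        length = t - start
        jump = abs(per_second_complexity[t] - per_second_complexity[t - 1])
        if length >= max_len or (length >= min_len and jump > jump_thresh):
            segments.append((start, t))
            start = t
    segments.append((start, len(per_second_complexity)))
    return segments

def needs_extra_track(segment: Tuple[int, int],
                      per_second_complexity: List[float],
                      high: float = 0.7) -> bool:
    """Augment only segments whose peak complexity is high, where an
    intermediate bitrate track gives the ABR algorithm a softer landing."""
    s, e = segment
    return max(per_second_complexity[s:e]) > high
```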

[Solution] End-to-end Scheduling of Real-time Task Pipelines on Multiprocessors

Task pipelines are common in today’s embedded systems, as data moves from source to sink in sensing-processing-actuation task chains. A real-time task pipeline is constructed by connecting a series of periodic tasks with data buffers. In a time-critical system, end-to-end timing and data-transfer properties of a task pipeline must be guaranteed. A guarantee could be mathematically expressed by assigning constraints to the tasks of a pipeline. However, deriving task scheduling parameters to meet end-to-end guarantees is an NP-hard constraint optimization problem. Hence, a traditional constraint solver is not a suitable runtime solution.

In this paper, we present a heuristic constraint solver algorithm, CoPi, to derive the execution times and periods of pipelined tasks that meet the end-to-end constraints and schedulability requirements. We consider two upper-bound constraints on a task pipeline: end-to-end delay and loss rate. After satisfying these constraints, CoPi schedules a pipeline as a set of asynchronous, data-independent periodic tasks under the rate-monotonic scheduling algorithm. Simulations show that CoPi achieves a pipeline acceptance ratio comparable to open-source MINLP solvers, with significantly better runtime. Furthermore, we use CoPi to map multiple task pipelines to a multiprocessor system. We demonstrate that a partitioned multiprocessor scheduling algorithm coupled with CoPi accommodates dynamically appearing pipelines while attempting to minimize task migrations.
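For intuition, the sketch below shows the two kinds of checks such a heuristic search must pass: the classic Liu-Layland utilization bound for rate-monotonic schedulability, and a coarse end-to-end delay bound for an asynchronous buffered chain, where each stage can add up to one sampling period plus its response time. The specific bounds and task model are illustrative assumptions; CoPi's actual algorithm may differ.

```python
from math import inf

# Tasks are (C, T) pairs (worst-case execution time, period),
# sorted by ascending period, i.e., rate-monotonic priority order.

def rm_utilization_ok(tasks):
    """Liu-Layland sufficient test: U <= n * (2^(1/n) - 1)."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    return u <= n * (2 ** (1 / n) - 1)

def response_time(i, tasks):
    """Exact RM response-time analysis for task i via fixed-point iteration."""
    c_i, t_i = tasks[i]
    r = c_i
    while True:
        # Interference from higher-priority tasks: ceil(r / T_hp) * C_hp
        interference = sum(-(-r // t) * c for c, t in tasks[:i])
        r_next = c_i + interference
        if r_next > t_i:
            return inf          # unschedulable: implicit deadline missed
        if r_next == r:
            return r
        r = r_next

def end_to_end_delay_bound(tasks):
    """Coarse bound for an asynchronous, buffered chain: each stage may
    wait up to one period for fresh data, then takes its response time."""
    return sum(t + response_time(i, tasks) for i, (c, t) in enumerate(tasks))

# Example: a 3-stage pipeline with utilization ~0.56
pipeline = [(1, 5), (2, 10), (3, 20)]
print(rm_utilization_ok(pipeline), end_to_end_delay_bound(pipeline))
```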

[SoK] The Great GAN Bake Off, An Extensive Systematic Evaluation of Generative Adversarial Network Architectures for Time Series Synthesis

There is no standard approach to compare the success of different neural network architectures utilized for time series synthesis. This hinders the evaluation and decision process as to which architecture should be leveraged for an unknown data set. We propose a combination of metrics, which empirically evaluate the performance of neural network architectures trained for time series synthesis. With these measurements we are able to account for temporal correlations, spatial correlations, and mode collapse issues within the generated time series.

We further investigate the interaction of different generator and discriminator architectures with each other. The considered architectures include recurrent neural networks, temporal convolutional networks, and transformer-based networks. So far, the application of transformer-based models to time series synthesis has been limited. Hence, we propose a new transformer-based architecture which is able to synthesise time series. We evaluate the proposed architectures and their combinations in over 500 experiments, amounting to over 2500 computing hours. We provide results for four data sets, one univariate and three multivariate. The data sets vary with regard to length, as well as patterns in temporal and spatial correlations.

We use our metrics to compare the performance of generative adversarial network architectures for time series synthesis. To verify our findings we utilize quantitative and qualitative evaluations. Our results indicate that temporal convolutional networks currently outperform recurrent neural network and transformer-based approaches with regard to fidelity and flexibility of the generated time series data. The temporal convolutional network architecture is the most stable architecture for a mode-collapse-prone data set. The performance of the transformer models strongly depends on the data set characteristics; they struggled to synthesise data sets with high temporal and spatial correlations. Discriminators with recurrent network architectures suffer from vanishing gradients. We also show that the performance of the generative adversarial networks depends more on the discriminator than on the generator.
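As an example of the kind of temporal-correlation metric such an evaluation relies on, the sketch below compares the autocorrelation functions of real and synthetic series and reports the mean gap. This is an illustrative stand-in; the paper's precise metric definitions may differ.

```python
import numpy as np

def acf(x: np.ndarray, max_lag: int) -> np.ndarray:
    """Sample autocorrelation of a 1-D series for lags 1..max_lag."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k)
                     for k in range(1, max_lag + 1)])

def temporal_fidelity(real: np.ndarray, synth: np.ndarray,
                      max_lag: int = 50) -> float:
    """Mean absolute ACF gap between paired real and synthetic series;
    lower is better (0 means matching temporal correlation structure)."""
    gaps = [np.abs(acf(r, max_lag) - acf(s, max_lag)).mean()
            for r, s in zip(real, synth)]
    return float(np.mean(gaps))
```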

[Solution] Mir-BFT: Scalable and Robust BFT for Decentralized Networks

This paper presents Mir-BFT, a robust Byzantine fault-tolerant (BFT) total order broadcast protocol aimed at maximizing throughput on wide-area networks (WANs), targeting deployments in decentralized networks, such as permissioned and Proof-of-Stake permissionless blockchain systems.

Mir-BFT is the first BFT protocol that allows multiple leaders to propose request batches independently (i.e., parallel leaders), while effectively precluding performance degradation due to request duplication by rotating the assignment of a partitioned request hash space to leaders. As this mechanism removes the single-leader bandwidth bottleneck and exposes a computation bottleneck related to authenticating clients even on a WAN, our protocol further boosts throughput using a client signature verification sharding optimization. Our evaluation shows that Mir-BFT outperforms state-of-the-art single-leader protocols and orders more than 60000 signed Bitcoin-sized (500-byte) transactions per second on a widely distributed setup (100 nodes, 1 Gbps WAN) with typical latencies of a few seconds. Moreover, our evaluation exposes the impact of duplicate requests on parallel leader protocols, which Mir-BFT eliminates. We also evaluate Mir-BFT under different crash and Byzantine faults, demonstrating its performance robustness.

Mir-BFT relies on classical BFT protocol constructs, which simplifies reasoning about its correctness. Specifically, Mir-BFT is a generalization of the celebrated and scrutinized PBFT protocol. In a nutshell, Mir-BFT follows PBFT “safety-wise”, with changes needed to accommodate novel features restricted to PBFT liveness.
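For intuition about the parallel-leader mechanism, here is a minimal Python sketch of hash-space partitioning with epoch-based rotation: each request hashes to a bucket, and each bucket is owned by exactly one leader per epoch, so no two leaders can propose the same request at once, and a bucket stuck with a faulty or censoring leader rotates to a different leader in the next epoch. The bucket count and the round-robin rotation rule are illustrative assumptions, not Mir-BFT's exact parameters.

```python
import hashlib

def bucket_of(request: bytes, num_buckets: int) -> int:
    """Map a request deterministically into the partitioned hash space."""
    digest = hashlib.sha256(request).digest()
    return int.from_bytes(digest[:8], "big") % num_buckets

def leader_of(request: bytes, epoch: int, leaders: list,
              num_buckets: int) -> str:
    """Exactly one leader owns each bucket in a given epoch; rotating the
    assignment by epoch prevents permanent censorship of a bucket."""
    b = bucket_of(request, num_buckets)
    return leaders[(b + epoch) % len(leaders)]

# Example: the same request is served by a different leader each epoch.
leaders = ["n0", "n1", "n2", "n3"]
req = b"tx: A pays B 5 coins"
for epoch in range(3):
    print(epoch, leader_of(req, epoch, leaders, num_buckets=16))
```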