eScholarship
Open Access Publications from the University of California

Estimating Profitability of Alternative Crypto-currencies

(2018)

Digital currencies have flourished in recent years, buoyed by the tremendous success of Bitcoin. These blockchain-based currencies, called altcoins, have attracted enthusiasts who enter the market by mining or buying them. Whether to mine or to buy, however, can be a difficult decision; each altcoin differs from the others, and the market tends to be volatile. In this work, we analyze the profitability of mining and speculation for 36 altcoins using real-world blockchain and trade data. Using opportunity cost as a metric, we estimate the cost of mining a coin with respect to a more popular coin. For every dollar invested in mining or buying a coin, we also estimate the revenue under various conditions, such as the time of market entry and the length of the hold position. While some coins offer the potential for spectacular returns, many follow a simple bubble-and-crash scenario, which highlights both the extreme risks and the potential gains in altcoin markets.
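The opportunity-cost metric can be sketched as follows; the hash rate, reward rate, and revenue figures below are invented placeholders, not data from the paper:

```python
# Sketch of the opportunity-cost idea: the cost of mining an altcoin is taken
# to be the revenue the same hardware would have earned mining a more popular
# reference coin over the same period. All numbers are hypothetical.

def opportunity_cost(hashrate_hs, ref_reward_per_hash_usd, hours):
    """Revenue forgone by not pointing the hardware at the reference coin."""
    return hashrate_hs * ref_reward_per_hash_usd * 3600 * hours

def revenue_per_dollar(alt_revenue_usd, cost_usd):
    """Return per dollar invested, whether by mining or buying."""
    return alt_revenue_usd / cost_usd

cost = opportunity_cost(hashrate_hs=1e9, ref_reward_per_hash_usd=2e-13,
                        hours=24)
print(round(cost, 2))                                    # 17.28
print(round(revenue_per_dollar(alt_revenue_usd=25.0, cost_usd=cost), 2))
```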

Pre-2018 CSE ID: CS2017-1019


Hardening the NOVA File System

(2017)

Emerging fast, persistent memories will enable systems that combine conventional DRAM with large amounts of non-volatile main memory (NVMM) and provide huge increases in storage performance. Fully realizing this potential requires fundamental changes in how system software manages, protects, and provides access to data that resides in NVMM. We address these needs by describing an NVMM-optimized file system called NOVA that is both fast and resilient in the face of corruption due to media errors and software bugs. We identify and propose solutions for the unique challenges in hardening an NVMM file system, adapt state-of-the-art reliability techniques to an NVMM file system, and quantify the performance and storage overheads of these techniques. We find that NOVA's reliability features increase file system size by 14.9% and reduce application-level performance by between 2% and 38%.
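As an illustration of the kind of reliability technique being adapted (a sketch only, not NOVA's actual implementation), a file system can guard each metadata block with a checksum and verify it on every read:

```python
import zlib

# Illustrative sketch, not NOVA's actual code: guard a metadata block with a
# CRC32 checksum so that corruption from media errors or stray writes is
# detected when the block is read back.

def write_block(payload: bytes) -> bytes:
    """Prepend a little-endian CRC32 of the payload."""
    return zlib.crc32(payload).to_bytes(4, "little") + payload

def read_block(block: bytes) -> bytes:
    """Verify the checksum before trusting the metadata."""
    crc, payload = int.from_bytes(block[:4], "little"), block[4:]
    if zlib.crc32(payload) != crc:
        raise IOError("metadata checksum mismatch: possible corruption")
    return payload

blk = write_block(b"inode 42: size=4096")
assert read_block(blk) == b"inode 42: size=4096"
```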

Pre-2018 CSE ID: CS2017-1018


Echidna: Programmable Schematics to Simplify PCB Design

(2016)

In this paper we introduce Echidna, a hybrid schematic/text-based language for describing PCB circuit schematics. Echidna allows designers to use high-level programming constructs to describe schematics, supports modular, reusable design components with well-defined interfaces, and provides for complex parameterization of those modules. Echidna deeply integrates a high-level programming language into a schematic-based design flow. The designer can describe schematics in code, as a schematic, or as a seamless combination of the two. We demonstrate its usefulness with several case studies.
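The following hypothetical sketch illustrates the programmable-schematic idea in plain Python (Echidna's actual syntax is richer and differs): a parameterized module generates an n-stage RC filter netlist instead of requiring each stage to be drawn by hand.

```python
import math

# Hypothetical illustration of code-generated schematics; not Echidna syntax.

def rc_stage(index, r_ohms, c_farads):
    """One RC low-pass stage with its derived cutoff frequency."""
    return {"name": f"rc{index}", "R": r_ohms, "C": c_farads,
            "cutoff_hz": 1 / (2 * math.pi * r_ohms * c_farads)}

def rc_chain(n, r_ohms=1_000, c_farads=1e-7):
    """Parameterized module: one call yields an n-stage filter netlist."""
    return [rc_stage(i, r_ohms, c_farads) for i in range(n)]

stages = rc_chain(3)
print(len(stages), round(stages[0]["cutoff_hz"], 1))  # 3 1591.5
```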

Pre-2018 CSE ID: CS2016-1017


ASIC Clouds: Specializing the Datacenter

(2016)

GPU- and FPGA-based clouds have already demonstrated the promise of accelerating compute-intensive workloads with greatly improved power and performance. In this paper, we examine the design of ASIC Clouds, which are purpose-built datacenters composed of large arrays of ASIC accelerators, whose purpose is to optimize the total cost of ownership (TCO) of large, high-volume chronic computations, which are becoming increasingly common as more and more services are built around the Cloud model. On the surface, the creation of ASIC Clouds may seem highly improbable due to high NREs and the inflexibility of ASICs. Surprisingly, however, large-scale ASIC Clouds have already been deployed by a large number of commercial entities to implement the distributed Bitcoin cryptocurrency system. We begin with a case study of Bitcoin mining ASIC Clouds, which are perhaps the largest ASIC Clouds to date. From there, we design three more ASIC Clouds: a YouTube-style video-transcoding ASIC Cloud, a Litecoin ASIC Cloud, and a Convolutional Neural Network ASIC Cloud, and show 2-3 orders of magnitude better TCO versus CPU and GPU. Among our contributions, we present a methodology that, given an accelerator design, derives Pareto-optimal ASIC Cloud Servers by extracting data from placed-and-routed circuits and computational fluid dynamics simulations, and then employing clever but brute-force search to find the best jointly optimized ASIC, DRAM subsystem, motherboard, power delivery system, cooling system, operating voltage, and case design. Moreover, we show how datacenter parameters determine which of the many Pareto-optimal points is TCO-optimal. Finally, we examine when it makes sense to build an ASIC Cloud, and examine the impact of ASIC NRE.
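A toy version of the brute-force joint search can be sketched as follows; the cost and latency models are invented placeholders, not figures from the paper:

```python
from itertools import product

# Enumerate combinations of design parameters and keep only the points that
# are Pareto-optimal in (cost, latency). Real searches span many more
# dimensions (DRAM, cooling, voltage, case design); this is a 2-knob toy.

def pareto_frontier(points):
    """Keep points not strictly dominated in both cost and latency."""
    def dominates(q, p):
        return (q["cost"] <= p["cost"] and q["lat"] <= p["lat"]
                and (q["cost"] < p["cost"] or q["lat"] < p["lat"]))
    return [p for p in points if not any(dominates(q, p) for q in points)]

designs = [{"voltage": v, "dram_chans": d,
            "cost": 100 * v + 10 * d,   # toy TCO model
            "lat": 50 / v + 5 / d}      # toy latency model
           for v, d in product([0.7, 0.9, 1.1], [1, 2, 4])]
frontier = pareto_frontier(designs)
print(len(designs), len(frontier))  # 9 candidates, 7 Pareto-optimal
```

Which of the surviving frontier points is TCO-optimal then depends on datacenter-level parameters, as the abstract notes.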

Pre-2018 CSE ID: CS2016-1016


Gullfoss: Accelerating and Simplifying Data Movement among Heterogeneous Computing and Storage Resources

(2015)

High-end computer systems increasingly rely on heterogeneous computing resources. For instance, a datacenter server might include multiple CPUs, high-end GPUs, PCIe SSDs, and high-speed networking interface cards. All of these components provide computing resources and operate at high bandwidth. Coordinating the movement of data and scheduling computation across these resources is a complex task, as current programming models require system developers to explicitly schedule data transfers. Moving data is also inefficient in terms of both performance and energy costs: some applications running on GPU-equipped systems spend over 55% of their execution time and 53% of their energy moving data between the storage device and the GPU. This paper proposes Gullfoss, a system that provides a simplified programming model for these heterogeneous computing systems. Gullfoss provides a high-level interface for specifying an application's data movement requirements, and dynamically schedules data transfers while accounting for current system load and program requirements. Our initial implementation of Gullfoss focuses on data transfers between an SSD and a GPU, eliminating wasteful transfers to and from main memory as data moves between the two. This saves memory energy and bandwidth, leaving the CPU free to do useful work or operate at a lower frequency to improve energy efficiency. We implement and evaluate Gullfoss using commercially available hardware components. Gullfoss achieves 1.46× speedup, reduces energy consumption by 28%, and improves energy-delay product by 41%, compared with systems without Gullfoss. For multi-program workloads, Gullfoss shows 1.5× speedup. Gullfoss also improves the performance of a GPU-based MapReduce framework by 10%.
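The programming-model idea can be caricatured as follows; this is a hypothetical sketch, not Gullfoss's real API: the application declares what data must reach the GPU, and a scheduler picks a direct SSD-to-GPU route when one exists instead of staging the data through main memory.

```python
# Hypothetical routing table: which hop sequences exist between devices.
ROUTES = {
    ("ssd", "gpu"): ["ssd", "gpu"],          # direct peer-to-peer path
    ("ssd", "cpu"): ["ssd", "dram", "cpu"],  # must go through DRAM
}

def schedule_transfer(src, dst):
    """Return the hop sequence for a declared transfer requirement."""
    # Default: stage through DRAM when no direct route is known.
    return ROUTES.get((src, dst), [src, "dram", dst])

print(schedule_transfer("ssd", "gpu"))  # ['ssd', 'gpu'] -- skips DRAM
print(schedule_transfer("ssd", "cpu"))  # ['ssd', 'dram', 'cpu']
```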

Pre-2018 CSE ID: CS2015-1015


Experience in Building a Comparative Performance Analysis Engine for a Commercial System

(2015)

Performance testing is a standard practice for evolving systems to detect performance issues proactively. It samples various performance metrics, which are compared with a stable baseline to judge whether the measurement data is abnormal. This type of comparative analysis requires domain expertise and can take experienced performance analysts days to conduct. In an effort to build an automatic solution for a leading data warehousing company to improve the efficiency of comparative performance analysis, we implemented machine learning approaches proposed by existing research. But the initial result had an 86% false negative rate on average, meaning the majority of performance defects would be missed. To investigate the causes of this unsatisfactory result, we take a step back to revisit the performance data itself and find several important data-related issues that are overlooked by existing work. In this paper, we discuss these issues in detail and share our insights on addressing them. With the new learning scheme we devise, we are able to reduce the false negative rate to as low as 16% and achieve a balanced accuracy of 0.91, which enables the analysis engine to be adopted in practice.
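For reference, the two metrics quoted above are computed from a confusion matrix as follows; the counts here are hypothetical, chosen only to land near the reported numbers:

```python
# False negative rate and balanced accuracy from a confusion matrix,
# treating a performance defect as the positive class.

def fnr(tp, fn):
    """Fraction of actual defects the classifier missed."""
    return fn / (fn + tp)

def balanced_accuracy(tp, fn, tn, fp):
    """Mean of per-class recalls; robust when classes are imbalanced."""
    sensitivity = tp / (tp + fn)   # recall on defective runs
    specificity = tn / (tn + fp)   # recall on healthy runs
    return (sensitivity + specificity) / 2

# e.g. 84 of 100 defects caught, 96 of 100 healthy runs passed:
print(fnr(tp=84, fn=16))                                    # 0.16
print(round(balanced_accuracy(tp=84, fn=16, tn=96, fp=4), 2))  # 0.9
```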

Pre-2018 CSE ID: CS2015-1014


Sorting 100 TB on Google Compute Engine

(2015)

Google Compute Engine offers a high-performance, cost-effective means for running I/O-intensive applications. This report details our experience running large-scale, high-performance sorting jobs on GCE. We run sort applications up to 100 TB in size on clusters of up to 299 VMs, and find that we are able to sort data at or near the hardware capabilities of the locally attached SSDs. In particular, we sort 100 TB on 296 VMs in 915 seconds at a cost of $154.78. We compare this result to our previous sorting experience on Amazon Elastic Compute Cloud and find that Google Compute Engine can deliver similar levels of performance. Although individual EC2 VMs have higher levels of performance than GCE VMs, permitting significantly smaller cluster sizes on EC2, we find that the total dollar cost that the user pays on GCE is 48% less than the cost of running on EC2.
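A back-of-the-envelope check of the quoted figures gives the aggregate throughput and cost per terabyte implied by that run:

```python
# Derived figures for the 100 TB sort on GCE quoted above.
tb = 100
seconds = 915
vms = 296
cost_usd = 154.78

throughput_gbps = tb * 1000 / seconds        # aggregate GB/s (decimal units)
per_vm_mbps = throughput_gbps * 1000 / vms   # MB/s per VM
cost_per_tb = cost_usd / tb                  # dollars per TB sorted

print(round(throughput_gbps, 1))  # 109.3 GB/s across the cluster
print(round(per_vm_mbps, 1))      # ~369.2 MB/s per VM
print(round(cost_per_tb, 2))      # $1.55 per TB
```

The roughly 370 MB/s per VM is consistent with the claim of running at or near local SSD bandwidth.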

Pre-2018 CSE ID: CS2015-1013


Power Side Channels in Security ICs: Hardware Countermeasures

(2015)

Power side-channel attacks are a highly effective cryptanalysis technique that can infer the secret keys of security ICs by monitoring a chip's power consumption. Since the emergence of practical attacks in the late 1990s, they have been a major threat to many cryptography-equipped devices, including smart cards, encrypted FPGA designs, and mobile phones. Designers and manufacturers of cryptographic devices have responded by developing various countermeasures for protection. Attack methods have also evolved to counteract resistant implementations. This paper reviews foundational power analysis attack techniques and examines a variety of hardware design mitigations. The aim is to highlight exposed vulnerabilities in hardware-based countermeasures to inform future, more secure implementations.
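The leakage model underlying these attacks can be shown in miniature; this toy (with made-up plaintexts and key, and no real S-box or measurement noise) assumes power consumption tracks the Hamming weight of processed data, so the correct key guess best explains the "measured" traces:

```python
# Toy Hamming-weight power model behind differential power analysis.

def hamming_weight(x):
    return bin(x).count("1")

plaintexts = [0x00, 0x01, 0x13, 0xF7]   # arbitrary known inputs
true_key = 0xA5                          # secret we pretend not to know

# "Measured" leakage: HW of the key-dependent intermediate value.
traces = [hamming_weight(p ^ true_key) for p in plaintexts]

def fit(guess):
    """Squared error between the model for a key guess and the traces."""
    model = [hamming_weight(p ^ guess) for p in plaintexts]
    return sum((m - t) ** 2 for m, t in zip(model, traces))

print(fit(0xA5), fit(0x42))  # correct guess fits perfectly: 0 vs 12
```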

Pre-2018 CSE ID: CS2015-1012


The Non-Volatile Memory Technology Database (NVMDB)

(2015)

We present a survey of non-volatile memory technology papers published between 2000 and the present day in leading journals and conference proceedings in the area of integrated circuit design and semiconductor devices. We present a summary of the data provided in these papers and use that data to model basic aspects of their performance at an architectural level. The full data set and complete bibliography are available online.

Pre-2018 CSE ID: CS2015-1011


S2Sim: Smart Grid Swarm Simulator

(2015)

The Smart Grid is drawing attention from various research areas. Distributed control algorithms at different scales within the grid are being developed and deployed, yet their effects on each other and on the grid's health and stability have not been sufficiently studied due to the lack of a capable simulator. Simulators in the literature can solve the power flow by modeling the physical system, but fail to address the cyber-physical aspect of a smart grid with multiple agents. To address this gap, we have developed S2Sim: Smart Grid Swarm Simulator. S2Sim allows any object within the grid to have its own independent control, transforming physical elements into cyber-physical representations. Objects can be of any size, ranging from a light bulb to a whole microgrid, and their representative data can be supplied from a real device, a simulation, a distributed control algorithm, or a database. S2Sim shields the complexity of the power flow solution from the control algorithms and directly supplies information on system stability. This information can be used by virtual coordinators to provide feedback signals, such as prices or regulation incentives, to form closed-loop control. Using three case studies, we illustrate how different distributed control algorithms can have varying effects on system stability, effects that would go undetected in the absence of our simulator. Furthermore, the case studies show that a control algorithm cannot be validated without being tested within the broader grid context.
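The closed-loop coordination idea can be illustrated with a toy price-feedback loop; this sketch uses invented load parameters and is not S2Sim's interface: a coordinator raises a price signal when aggregate demand exceeds capacity, and independently controlled loads respond to it.

```python
# Toy closed-loop price feedback between a coordinator and independent loads.

def step(price, loads, capacity=10.0, k=0.1):
    """One control step: loads respond to price; coordinator adjusts it."""
    demand = sum(max(0.0, base - sensitivity * price)
                 for base, sensitivity in loads)
    return price + k * (demand - capacity), demand

# Each load: (base demand, price sensitivity) -- invented values.
loads = [(4.0, 0.5), (5.0, 0.8), (6.0, 1.0)]
price = 0.0
for _ in range(200):
    price, demand = step(price, loads)
print(round(demand, 2))  # demand settles at the 10.0 capacity
```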

Pre-2018 CSE ID: CS2015-1010