eScholarship
Open Access Publications from the University of California

Fast Transient Simulation of Lossy Transmission Lines

(2006)

In this paper, an efficient approach is proposed for the transient simulation of lossy transmission lines. The complexity of the conventional convolution approach for lossy transmission lines is reduced from O(N^2) to O(N log^2 N) by a multilevel FFT convolution method, where N is the total number of time points. Numerical convolution formulas that exploit both the analytical forms of the lossy transmission line impulse responses and adaptive time steps are developed for the multilevel FFT convolution method. A new breakpoint control scheme is also proposed to adaptively control the time step. Experimental results show that the proposed approach is over 100 times faster than Berkeley SPICE3 [1, 4] while maintaining the same accuracy.
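The core of the speedup is the standard fact that convolving two length-N sequences via the FFT costs O(N log N) instead of O(N^2); a multilevel scheme applies this to a running convolution (in the standard formulation, by splitting the past history into blocks of doubling size), giving O(N log^2 N) overall. Below is a minimal numpy sketch of the single-level building block; the toy impulse response stands in for the paper's analytical lossy-line responses, and all names are illustrative.

    import numpy as np

    def fft_convolve(h, x):
        """Linear convolution of impulse response h with input x via FFT."""
        n = len(h) + len(x) - 1
        N = 1 << (n - 1).bit_length()        # round up to a power of two
        H = np.fft.rfft(h, N)                # zero-padded transforms ...
        X = np.fft.rfft(x, N)
        return np.fft.irfft(H * X, N)[:n]    # ... multiplied and inverted

    # Toy example: a decaying impulse response driven by a unit step.
    h = np.exp(-np.linspace(0.0, 10.0, 1024))   # hypothetical line response
    x = np.ones(1024)                           # unit-step excitation
    y = fft_convolve(h, x)
    assert np.allclose(y, np.convolve(h, x))    # matches the O(N^2) result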

Pre-2018 CSE ID: CS2006-0874


Maintaining Safe Memory for Security, Debugging, and Multi-threading

(2006)

As growing transistor budgets enable chip multi-core processors, adding hardware support to ensure the correctness and security of programs will be just as important, for the average user, as adding another core. The goal of our research is to develop hardware support for software checks and for multi-threaded synchronization that protects memory from corruption, which is the source of a significant number of bugs and security issues. We want these checks and this synchronization to have low run-time overhead so that they can be left on all of the time, even in production code releases.

Bounds checking protects pointer and array accesses. It is the process of keeping track of the address boundaries of objects and checking that loads and stores do not stray outside them. It serves two purposes: it assists debugging by precisely capturing invalid memory accesses, and it guarantees protection against buffer overflow attacks. Unfortunately, the high performance overhead of run-time bounds checking has prevented its inclusion in most released software. In this thesis, we analyze the sources of bounds checking overhead. We then consider hardware/software enhancements: the effect of merging the check into a single instruction, software optimizations that eliminate checks based on their potential for overflow, and changes to meta-data layout that limit copying overhead.

Transactional memories enable programmers to greatly simplify multithreaded code by eliminating lock synchronization and its attendant deadlock and livelock problems. Unlike locks, they also enable speculative concurrent execution through the critical section. Specialized transactional memory can also aid concurrent programming models by providing determinism only when needed at run time. A key limitation of past transactional memory proposals is that they have a finite memory capacity. Virtualized transactional memory (unbounded in space and time) is desirable, as it can increase the scope of transactions' use and thereby further simplify a programmer's job. In this thesis, we introduce Page-based Transactional Memory to support unbounded transactions. We combine transaction bookkeeping with the virtual memory system to support fast transaction conflict detection, commit, and abort, and to maintain transactions' speculative data.
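The bounds-checking discipline described above is easy to picture in miniature: every object carries base/bound meta-data, and every load and store is checked against it first. A minimal Python sketch follows; the class and its names are illustrative only, since the thesis studies hardware/software support for these checks rather than a software model like this.

    class BoundedBuffer:
        """Toy object carrying base/bound meta-data alongside its storage."""
        def __init__(self, size):
            self._data = [0] * size
            self._base, self._bound = 0, size

        def _check(self, index):
            # The check every load and store must pass before touching memory.
            if not (self._base <= index < self._bound):
                raise IndexError(f"access {index} outside [{self._base}, {self._bound})")

        def load(self, index):
            self._check(index)
            return self._data[index]

        def store(self, index, value):
            self._check(index)
            self._data[index] = value

    buf = BoundedBuffer(8)
    buf.store(3, 42)
    try:
        buf.load(9)            # out of bounds: caught precisely, not corrupted
    except IndexError as err:
        print(err)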

Pre-2018 CSE ID: CS2006-0873


Event-Driven Multithreaded Dynamic Optimization

(2006)

Dynamic optimization has been proposed to overcome many limitations of static optimization, such as inaccurate assumptions about the underlying processor architecture and lack of adaptation to the program's runtime behavior. However, existing dynamic optimization systems often impose high runtime overhead (software systems) or great hardware complexity (hardware systems), and they have only limited runtime adaptability. This thesis proposes a new model of optimization in which optimization is triggered by hardware optimization events and is performed concurrently on an application while it is running. We introduce an event-driven multithreaded dynamic optimization architecture, called Trident. Trident is a software/hardware solution that strives to reduce software runtime overhead as well as hardware complexity and inflexibility. Trident takes advantage of two key features of modern processors: abundant chip-level parallelism (through Simultaneous Multithreading, Chip Multithreading, or a combination of the two) and increasing hardware support for runtime performance monitoring. Trident proposes generic, lightweight extensions of the hardware monitoring mechanisms to profile the program's execution behavior. Hardware triggers an optimization event upon detecting any interesting behavior, and lightweight helper threads are spawned to process these events in parallel with the main thread. The combination of event-driven and concurrent optimization gives Trident extremely low overhead in both profiling and optimization. This enables Trident to perform more expensive optimizations than existing dynamic systems, and to perform continuous recurrent optimizations without fear of performance loss. The power and flexibility of Trident enable many types of optimizations. In this thesis, we demonstrate this with an aggressive optimization called speculative dynamic value specialization. We also demonstrate Trident's capacity for continuous, gradual optimization by improving traditional software prefetching to better attack the classical memory wall problem. These approaches include adaptive dynamic software prefetching via self-repairing, and accelerating precomputation-based prefetching.
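The event-driven pattern is essentially a producer/consumer loop: a monitor posts optimization events, and helper threads drain them while the main thread keeps running. A minimal Python sketch of that pattern follows; the event values and the "optimize" action are stand-ins, since in Trident the producers are hardware performance monitors rather than software.

    import queue
    import threading

    events = queue.Queue()          # optimization events posted by the monitor

    def helper():
        while True:
            event = events.get()    # block until an event arrives
            if event is None:       # shutdown sentinel
                break
            print(f"helper: optimizing hot region {event:#x}")
            events.task_done()

    threading.Thread(target=helper, daemon=True).start()

    # The main thread continues executing; a monitor would post events like:
    for hot_pc in (0x4004F0, 0x400A10):
        events.put(hot_pc)

    events.join()                   # wait for the helpers to drain the queue
    events.put(None)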

Pre-2018 CSE ID: CS2006-0872


Active learning for visual object detection

(2006)

One of the most labor-intensive aspects of developing accurate visual object detectors using machine learning is gathering a sufficient amount of labeled examples. We develop a selective sampling method, based on boosting, which dramatically reduces the amount of human labor required for this task. We apply this method to the problem of detecting pedestrians from a video camera mounted on a moving car. We demonstrate how combining boosting and active learning achieves high levels of detection accuracy in complex and variable backgrounds.
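A minimal sketch of a boosting-plus-selective-sampling loop is below: train a boosted classifier, then ask a human to label only the pool examples the ensemble is least confident about (smallest absolute margin). scikit-learn and the oracle callback are assumptions for illustration; the paper's detector, features, and selection rule may differ.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    def selective_sampling(X_labeled, y_labeled, X_pool, oracle,
                           rounds=5, batch=10):
        """oracle(X) is the human labeler: returns labels for the rows of X."""
        for _ in range(rounds):
            clf = AdaBoostClassifier(n_estimators=50).fit(X_labeled, y_labeled)
            margins = np.abs(clf.decision_function(X_pool))  # ensemble confidence
            ask = np.argsort(margins)[:batch]                # least-confident first
            X_labeled = np.vstack([X_labeled, X_pool[ask]])
            y_labeled = np.concatenate([y_labeled, oracle(X_pool[ask])])
            X_pool = np.delete(X_pool, ask, axis=0)          # remove labeled rows
        return clf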

Pre-2018 CSE ID: CS2006-0871


Privacy in GLAV Information Integration

(2006)

We define and study formal privacy guarantees for information integration systems in which sources are related to a public schema by mappings given by source-to-target dependencies that express inclusion of unions of conjunctive queries with equality. This generalizes previous privacy work in the global-as-view publishing scenario and covers local-as-view as well as combinations of the two. We concentrate on logical security, where malicious users have the same level of access as legitimate users: they can issue queries against the global schema, which are answered under "certain answers" semantics, and then use unlimited computational power and external knowledge on the results of the queries to guess the result of a secret query ("the secret") on one or more of the sources, which are not directly accessible. We do not address issues of physical security, which include how to prevent users from gaining unauthorized access to the data. We define both absolute guarantees (how safe is the secret?) and relative guarantees (how much of the secret is additionally disclosed when the mapping is extended, for example to allow new data sources or new relationships between an existing data source and the global schema?). We provide algorithms for checking whether these guarantees hold, and undecidability results for related, stronger guarantees.
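For concreteness, the source-to-target dependencies in question have, in their simplest conjunctive form, the shape (a generic illustration, not an example from the paper):

    \forall \bar{x}\; \bigl( \varphi_S(\bar{x}) \;\rightarrow\; \exists \bar{y}\; \psi_G(\bar{x}, \bar{y}) \bigr)

where \varphi_S is a conjunction of atoms over the source schemas and \psi_G a conjunction of atoms over the global schema, possibly with equalities. Global-as-view is the special case where \psi_G is a single atom with no existential variables; local-as-view is the case where \varphi_S is a single atom.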

Pre-2018 CSE ID: CS2006-0869


Single Image Appearance Measurement

(2006)

Creating realistic renderings in computer graphics depends on knowledge of the reflectance properties of the materials in the scene. For opaque materials, the bidirectional reflectance distribution function (BRDF) captures these properties. Full BRDFs can be measured using highly accurate gonioreflectometers or, more recently, with techniques using digital cameras. These approaches either sample the BRDF densely and use simple interpolation for rendering, or acquire data sparsely and fit it to a BRDF model. Previous methods require images from a number of camera positions under a variety of known light positions. We present a technique to recover appearance model parameters from a single image under arbitrary lighting. Using a single high dynamic range image of a material sample of known geometry, an environment map, and an iterative rendering and optimization approach, we can recover parameters for a wide variety of materials in a short amount of time and with less stringent experimental constraints than previous techniques. Our technique is not limited to recovering BRDF parameters for opaque materials: we can fit parameters for any low-parameter appearance model used in a typical renderer. We have measured a number of opaque and transparent materials, and we present results comparing an image of a sample material with its rendered counterpart using the recovered parameters. We further use our optimization approach to recover BRDF model parameters from a single rendered image of a number of materials rendered from a data-driven BRDF model.
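The iterative rendering-and-optimization loop amounts to adjusting appearance parameters until a rendering of the known geometry under the captured environment map matches the input HDR image. A minimal sketch follows, with a toy diffuse "renderer" and scipy's optimizer standing in for the real rendering system; all names are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def render(params, shading):
        """Toy stand-in: albedo (params[0]) times precomputed shading.
        A real system evaluates the full appearance model under the
        environment map for the known geometry."""
        return params[0] * shading

    def fit_appearance(hdr_image, shading, initial_params):
        loss = lambda p: np.mean((render(p, shading) - hdr_image) ** 2)
        return minimize(loss, initial_params, method="Nelder-Mead").x

    # Toy usage: recover an albedo of 0.7 from a synthetic "photograph".
    shading = np.random.rand(64, 64)
    photo = 0.7 * shading
    print(fit_appearance(photo, shading, np.array([0.2])))   # ~[0.7]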

Pre-2018 CSE ID: CS2006-0868


Incremental Sparse Binary Vector Similarity Search in High-Dimensional Space

(2006)

Given a sparse binary matrix A and a sparse query vector x, can we efficiently identify the large entries of the matrix-vector product Ax? This problem occurs in document comparison, spam filtering, network intrusion detection, and information retrieval, as well as in other areas. We present an exact deterministic algorithm that takes advantage of the sparseness of A and x. Although the worst-case query complexity is linear in the number of rows of A, the amortized query complexity for a sequence of several similar queries depends only logarithmically on the size of A when the non-zero entries of A and the queries are distributed uniformly.
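The sparsity argument can be made concrete with an inverted index over the columns of A: a query then touches only the rows that share a nonzero coordinate with x, rather than every row. The sketch below illustrates just this building block; the paper's algorithm additionally amortizes work across sequences of similar queries, which is not shown here.

    from collections import defaultdict

    def build_index(rows):
        """rows: one set of nonzero column indices per row of the binary matrix A."""
        index = defaultdict(list)
        for i, row in enumerate(rows):
            for col in row:
                index[col].append(i)
        return index

    def large_entries(index, x, threshold):
        """Rows i with (Ax)_i >= threshold, for a sparse binary query x."""
        counts = defaultdict(int)
        for col in x:                        # only x's nonzero coordinates
            for i in index[col]:
                counts[i] += 1
        return {i for i, c in counts.items() if c >= threshold}

    A = [{0, 2, 5}, {1, 2}, {5, 7}]
    print(large_entries(build_index(A), {2, 5}, threshold=2))   # {0}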

Pre-2018 CSE ID: CS2006-0866


Distributed Application Management Using Plush

(2006)

Although a number of solutions exist for subtasks of application deployment and monitoring in large-scale, distributed environments, few tools provide a unified framework for distributed application management. Many of the existing tools address the management needs of a single class of applications or services that run in a specific environment and are not extensible enough to be used for other applications. In this paper, we discuss the design and implementation of Plush, a fully configurable application control infrastructure designed to meet the general requirements of several different classes of distributed applications. The paper discusses how users define the specific flow of control they need using the application building blocks provided by Plush. We also take a closer look at a few specific distributed applications to understand how Plush provides support for each.

Pre-2018 CSE ID: CS2006-0864


Resource Reclamation in Distributed Hash Tables

(2006)

In this paper, we discuss the problem of storage reclamation in DHTs that use redundancy to provide high data availability. Two approaches to performing storage reclamation for lazy repair are active polling and implicit garbage collection. We evaluate the overhead of these two approaches under conditions ranging from low to high churn. Garbage collection is most effective in high-churn peer-to-peer environments, while the active approach is more suitable for low-churn systems such as educational or corporate environments.
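One way to picture the two strategies is with soft-state leases: under implicit garbage collection a replica simply expires unless its owner refreshes it (no reclamation messages at all), whereas active polling asks owners explicitly whether each replica is still wanted. The sketch below is illustrative only; the paper's actual protocols and message formats are not specified here.

    import time

    class ReplicaStore:
        def __init__(self, lease_seconds):
            self.lease = lease_seconds
            self.expiry = {}                  # replica key -> lease expiry time

        def put(self, key):
            # The owner (re)extends the lease whenever it stores or refreshes.
            self.expiry[key] = time.time() + self.lease

        def garbage_collect(self):
            """Implicit reclamation: drop replicas whose lease has lapsed."""
            now = time.time()
            self.expiry = {k: t for k, t in self.expiry.items() if t > now}

        def poll(self, is_still_wanted):
            """Active reclamation: explicitly ask about every replica held."""
            self.expiry = {k: t for k, t in self.expiry.items()
                           if is_still_wanted(k)}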

Pre-2018 CSE ID: CS2006-0863


ShortCuts: Using Soft State To Improve DHT Routing

(2006)

Distributed hash tables are increasingly being proposed as the core substrate for content delivery applications in the Internet, such as cooperative Web caches, Web index and search, and content delivery systems. The performance of applications built on DHTs fundamentally depends on the effectiveness of request routing within the DHT. In this paper, we show how to use soft state to achieve routing performance that approaches the aggressive performance of one-hop schemes, but with an order of magnitude less overhead on average. We use three kinds of hint caches to improve routing latency: local hint caches, path hint caches, and global hint caches. Local hint caches use large successor lists to short-cut final hops. Path hint caches store a moderate number of effective route entries gathered while performing lookups for other nodes. Global hint caches store direct routes to peers distributed across the ID space. Based on our simulation results, we find that the combination of these hint caches significantly improves Chord routing performance: in a network of 4,096 peers, the hint caches enable Chord to route requests with average latencies only 6% higher than algorithms that use complete routing tables, while incurring significantly less overhead.
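Routing with hint caches reduces to one rule: pick, from the union of the finger table and the hint caches, the node that gets closest to the key without passing it. A minimal sketch on a toy identifier ring follows; the structures and IDs are illustrative, and cache population and stale-entry repair are elided.

    RING = 1 << 16                      # toy ring size (Chord uses 2**160)

    def clockwise(a, b):
        """Distance from id a to id b going clockwise around the ring."""
        return (b - a) % RING

    def next_hop(key, my_id, finger_table, local_hints, path_hints, global_hints):
        candidates = set(finger_table) | local_hints | path_hints | global_hints
        # Keep only nodes strictly between us and the key (they make progress).
        ahead = [n for n in candidates
                 if 0 < clockwise(my_id, n) <= clockwise(my_id, key)]
        return max(ahead, key=lambda n: clockwise(my_id, n), default=None)

    hop = next_hop(0x9F3A, 0x1000, [0x2000, 0x8000], {0x9E00}, set(), {0x4000})
    print(hex(hop))                     # 0x9e00: a cached hint short-cuts the route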

Pre-2018 CSE ID: CS2006-0862