eScholarship
Open Access Publications from the University of California
Dynamic Selection of Compression Formats to Reduce Transfer Delay

(2000)

The computational paradigm of the Internet is such that applications are retrieved from remote sites and processed locally or are transferred for remote execution. Given the gap between processor and network speeds, mechanisms are needed to compensate for transfer time in order to maintain acceptable performance of mobile programs. Compression reduces transfer delay by reducing the number of bytes transferred through the use of compact file encoding. In this paper, we examine two techniques for reducing compression-based transfer delay, using Java as our platform for mobile code. We first examine the benefit of Selective Compression, a profile-directed optimization that combines and compresses only the class files that are used during execution (as opposed to the entire application). Our results show that this approach reduces transfer delay by 11% to 13% on average across all compression techniques and networks studied. The second technique we examine is dynamic selection of compression formats based upon the underlying network connectivity. We define compression-based transfer delay as the time required for transfer and decompression of files. We show that the compression format that achieves the least delay varies greatly with the network bandwidth available. We therefore propose to store mobile programs at the server in multiple compression formats. Dynamic Compression Format Selection (DCFS) is then used on the client to predict the compression format that will result in the least delay, given the bandwidth predicted to be available when transfer occurs. Our results show that DCFS reduces compression-based transfer delay by 36% on average for the networks and wire-transfer formats studied. When combined with selective compression, we achieve a 47% average reduction in delay (a 60% reduction over the use of jar files).
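
As an illustration only (the class names, formats, and numbers below are ours, not the paper's), the DCFS decision reduces to comparing, per stored format, predicted transfer time plus decompression time under the forecast bandwidth, and shipping whichever format minimizes the sum:

```java
import java.util.List;

/** Illustrative sketch of Dynamic Compression Format Selection (DCFS):
 *  pick the wire format minimizing predicted transfer + decompression time.
 *  Field names and the example numbers are assumptions, not the paper's API. */
class FormatChoice {
    final String name;           // e.g. "jar" vs. a denser archive format
    final long compressedBytes;  // size of this pre-compressed archive on the server
    final double decompressMBps; // measured decompression throughput on this client

    FormatChoice(String name, long compressedBytes, double decompressMBps) {
        this.name = name;
        this.compressedBytes = compressedBytes;
        this.decompressMBps = decompressMBps;
    }

    /** Predicted compression-based delay in seconds: transfer plus decompression. */
    double predictedDelay(double predictedBandwidthMBps) {
        double mb = compressedBytes / 1e6;
        return mb / predictedBandwidthMBps + mb / decompressMBps;
    }
}

class Dcfs {
    /** Choose the format with the least predicted delay for the forecast bandwidth. */
    static FormatChoice select(List<FormatChoice> formats, double predictedBandwidthMBps) {
        FormatChoice best = null;
        for (FormatChoice f : formats) {
            if (best == null || f.predictedDelay(predictedBandwidthMBps)
                              < best.predictedDelay(predictedBandwidthMBps)) {
                best = f;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<FormatChoice> formats = List.of(
            new FormatChoice("jar (larger, fast to decompress)", 900_000, 60.0),
            new FormatChoice("denser format (smaller, slow to decompress)", 600_000, 1.5));
        // ~modem vs. ~10 Mbit LAN, in MB/s: the winning format flips with bandwidth.
        for (double bw : new double[] {0.004, 1.25}) {
            System.out.println(bw + " MB/s -> " + select(formats, bw).name);
        }
    }
}
```

On the slow link the smaller archive wins despite its slow decompression; on the fast link the jar wins, which is exactly why a single static format leaves delay on the table.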

Pre-2018 CSE ID: CS2000-0650

Reducing DRAM Power Using Compiler Assisted Refreshing

(2000)

The embedded market has always been a major source of income for the semiconductor industry. As both general-purpose and embedded processors move toward mobile markets, different design criteria are becoming more important. The traditionally performance-driven field of processor design must now also deal with power. Typically there is a performance requirement, and low-power, low-cost solutions must be found. In this paper we investigate a combined software and hardware solution for reducing DRAM power. We propose to mark DRAM rows that hold data that will never be read again, and to have the memory controller avoid refreshing those rows. To mark the rows with dead data, we propose adding a new instruction, freeNrows, to the instruction set architecture, to communicate to the memory controller that the N rows starting at the address provided should not be refreshed. If a store ever occurs to a non-refreshed row, the memory controller changes the status of that row back to refresh. For heap memory, a custom allocation routine marks DRAM rows as non-refresh when an object is freed. For global memory, compiler analysis can find global data objects (including large arrays) that have part or all of their data dead upon leaving a region of code, and a freeNrows instruction is then inserted to mark all of those DRAM rows as non-refreshed. Our results show that on average 60% of the refreshes issued could be skipped without compromising correctness.
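
A toy Java simulation of the proposed controller policy may make the mechanism concrete (the paper proposes hardware; the row size and interface here are our assumptions):

```java
/** Toy simulation of the proposed refresh-skipping policy: rows whose data is
 *  dead (marked via a freeNrows-style call) are not refreshed; a store to a
 *  non-refreshed row re-enables refresh for it. Row size is an assumption. */
class RefreshController {
    static final int ROW_BYTES = 4096;        // assumed DRAM row size
    private final boolean[] refreshEnabled;   // per-row refresh status
    private long issued, skipped;

    RefreshController(int numRows) {
        refreshEnabled = new boolean[numRows];
        java.util.Arrays.fill(refreshEnabled, true);
    }

    /** Models the proposed freeNrows instruction: N rows starting at addr hold dead data. */
    void freeNrows(long addr, int n) {
        int first = (int) (addr / ROW_BYTES);
        for (int r = first; r < first + n && r < refreshEnabled.length; r++)
            refreshEnabled[r] = false;
    }

    /** A store to a non-refreshed row flips it back to refresh status, preserving correctness. */
    void store(long addr) {
        refreshEnabled[(int) (addr / ROW_BYTES)] = true;
    }

    /** One refresh sweep: count refreshes issued vs. safely skipped. */
    void refreshAll() {
        for (boolean live : refreshEnabled) {
            if (live) issued++; else skipped++;
        }
    }

    double skippedFraction() { return (double) skipped / (issued + skipped); }
}
```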

Pre-2018 CSE ID: CS2000-0649

Reducing the Overhead of Compilation Delay

(2000)

The execution model for mobile, dynamically-linked, object-oriented programs has evolved from fast interpretation to a mix of interpreted and dynamically compiled execution. The primary motivation for dynamic compilation is that compiled code executes significantly faster than interpreted code. However, since dynamic compilation is performed while the application is running, the biggest challenge in using it is to reduce its overhead so as not to mitigate the runtime improvement that it delivers. Techniques for reducing dynamic compilation overhead can be classified as (1) decreasing the amount of compilation performed, or (2) overlapping compilation with useful work. In this paper, we first evaluate the effectiveness of Lazy Compilation as a technique for decreasing the amount of compilation performed. In lazy compilation, individual methods are compiled on demand (when called), thus avoiding the load-time delay of compiling all methods when a new class/module is loaded. Our experimental results (obtained by executing the SpecJVM Java programs on the Jalapeño JVM) show that lazy compilation results in the compilation of 57% to 63% fewer methods, and a reduction in compilation time of approximately 30%, compared to load-time compilation. Next, we present Profile-driven Background Compilation as a new technique for overlapping compilation with execution. The motivation for background compilation is to use idle cycles in multiprocessor systems to overlap compilation with application execution. Profile information is used to prioritize methods as candidates for background compilation. Our results show that background compilation can deliver significant reductions (26% to 79%) in total time, i.e., compilation plus execution time, compared to serial (non-background) compilation.
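
The two techniques might be sketched as follows; this is an illustrative skeleton, not Jalapeño's implementation, and all names are invented:

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

/** Sketch of lazy + profile-driven background compilation, outside any real JVM:
 *  methods compile lazily on first invocation, while a background thread compiles
 *  hot methods (by profiled call counts) using otherwise-idle cycles. */
class LazyMethod {
    final String name;
    final long profiledCalls;        // offline profile data used for prioritization
    private volatile boolean compiled;

    LazyMethod(String name, long profiledCalls) {
        this.name = name;
        this.profiledCalls = profiledCalls;
    }

    /** Lazy compilation: compile on demand, at the first call, not at class load. */
    void invoke() {
        if (!compiled) compile("lazily on first call");
        // ... execute compiled code ...
    }

    synchronized void compile(String why) {
        if (compiled) return;        // a background compile may have beaten us here
        compiled = true;
        System.out.println("compiled " + name + " " + why);
    }
}

class BackgroundCompiler implements Runnable {
    // Hottest methods first, so they are likely compiled before their first call.
    private final PriorityBlockingQueue<LazyMethod> queue = new PriorityBlockingQueue<>(
            16, Comparator.comparingLong((LazyMethod m) -> m.profiledCalls).reversed());

    void enqueue(LazyMethod m) { queue.add(m); }

    @Override public void run() {    // run on an idle processor
        try {
            while (true) queue.take().compile("in the background");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```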

Pre-2018 CSE ID: CS2000-0648

Circular Coinduction

(2000)

Circular coinduction is a new technique for behavioral reasoning that extends coinduction to specifications with circularities. We show that a congruence criterion due to Bidoit and Hennicker follows easily from circular coinduction, and we give some natural examples of circular coinductive proofs. A notation, called BOBJ, appropriate for our style of behavioral specification is also sketched. Finally, everything is conducted in a general framework that in a sense is the gcd of previous behavioral frameworks.
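
As a flavor of the technique (this worked example is ours, not one of the paper's), here is a circular coinductive proof over infinite streams, with behavioral observers hd and tl, that zipping the constant streams of 0s and 1s yields the alternating stream:

```latex
% Definitions: zeros = 0 : zeros,  ones = 1 : ones,  blink = 0 : 1 : blink,
%              zip(s,t) = hd(s) : zip(t, tl(s)).
\begin{align*}
\textbf{Goal } G_1 &:\; \mathrm{zip}(\mathit{zeros},\mathit{ones}) = \mathit{blink}\\
\mathrm{hd}:\;& \mathrm{hd}(\mathrm{zip}(\mathit{zeros},\mathit{ones})) = 0 = \mathrm{hd}(\mathit{blink})\\
\mathrm{tl}:\;& \mathrm{tl}(\mathrm{zip}(\mathit{zeros},\mathit{ones})) = \mathrm{zip}(\mathit{ones},\mathit{zeros}),
   \qquad \mathrm{tl}(\mathit{blink}) = 1 : \mathit{blink}\\
\textbf{Goal } G_2 &:\; \mathrm{zip}(\mathit{ones},\mathit{zeros}) = 1 : \mathit{blink}\\
\mathrm{hd}:\;& 1 = 1\\
\mathrm{tl}:\;& \mathrm{zip}(\mathit{zeros},\mathit{ones}) = \mathit{blink},
   \quad\text{which is } G_1 \text{ again: the circularity closes the proof.}
\end{align*}
```

Ordinary coinduction would require exhibiting a bisimulation up front; circular coinduction instead lets the reappearing goal $G_1$ be discharged by the circularity itself.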

Pre-2018 CSE ID: CS2000-0647

Encode-then-encipher encryption: How to exploit nonces or redundancy in plaintexts for efficient cryptography

(2000)

We investigate the following approach to symmetric encryption: first encode the message in some trivial way (e.g., prepend a counter and append a checksum), and then encipher the encoded message. Here "encipher" means to apply a cipher (i.e., a pseudorandom permutation) $F_K$, where $K$ is the shared key. We show that if the encoding step incorporates a nonce (a counter or randomness), in any way at all, then the resulting encryption scheme is semantically secure. And we show that if the encoding step incorporates redundancy, in any form at all, then, as long as the receiver verifies the presence of this redundancy in the deciphered string, the resulting encryption scheme achieves message authenticity. The second result helps explain and justify the prevalent misunderstanding that encrypting messages which have redundancy is enough to guarantee message authenticity: the statement is actually true if "encrypting" is understood as "enciphering." Encode-then-encipher encryption can be used to robustly and efficiently exploit structured message spaces. If one is presented with messages known a priori to contain something that behaves as a nonce, then privacy can be obtained with no increase in message length and no knowledge of the structure of the message, simply by enciphering the message. Similarly, if one is presented with messages known a priori to contain adequate redundancy, then message authenticity can be obtained with no increase in message length and no knowledge of the structure of the message, simply by enciphering the message.
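
A minimal sketch of the scheme's shape, assuming a variable-input-length pseudorandom permutation is available (constructing one is outside this sketch, and standard block ciphers do not directly provide it; CRC32 merely stands in for "any redundancy"):

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

/** Sketch of encode-then-encipher. The encoding below (prepend a counter,
 *  append a checksum) is the "trivial" encoding from the abstract; encipher()
 *  is a placeholder for a variable-input-length pseudorandom permutation F_K. */
class EncodeThenEncipher {
    private long counter;                 // the nonce: a per-sender counter

    byte[] encrypt(byte[] msg) {
        // Encode: nonce || message || redundancy (CRC32 checksum here).
        CRC32 crc = new CRC32();
        crc.update(msg);
        ByteBuffer enc = ByteBuffer.allocate(8 + msg.length + 8);
        enc.putLong(counter++).put(msg).putLong(crc.getValue());
        return encipher(enc.array());     // then encipher the whole encoding
    }

    /** Returns the message, or null if the redundancy check fails (rejects forgeries). */
    byte[] decrypt(byte[] ct) {
        ByteBuffer dec = ByteBuffer.wrap(decipher(ct));
        dec.getLong();                    // skip the nonce
        byte[] msg = new byte[dec.remaining() - 8];
        dec.get(msg);
        CRC32 crc = new CRC32();
        crc.update(msg);
        return dec.getLong() == crc.getValue() ? msg : null;
    }

    // Placeholders for the length-preserving cipher F_K and its inverse.
    private byte[] encipher(byte[] x) { throw new UnsupportedOperationException(); }
    private byte[] decipher(byte[] x) { throw new UnsupportedOperationException(); }
}
```

Note that when the message already carries a nonce or redundancy, the encoding step disappears entirely and enciphering alone suffices, which is the length-preserving case the abstract emphasizes.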

Pre-2018 CSE ID: CS2000-0646

Application Scheduling on the Information Power Grid

(2000)

One of the compelling reasons for developing the Information Power Grid (IPG) is to provide a platform for more rapid development and execution of simulations and other resource-intensive applications. However, the IPG will ultimately not be successful unless users and application developers can achieve good execution performance for their codes. In this paper, we describe a performance-efficient approach to scheduling applications in dynamic, multiple-user distributed environments such as the IPG. This approach provides the basis for application scheduling agents called AppLeS. We describe the AppLeS methodology and discuss the lessons learned from the development of AppLeS for a variety of distributed applications. In addition, we describe an AppLeS-in-progress currently being developed for NASA's INS2D code, a distributed "parameter sweep" application.
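
Schematically (the performance model and names below are our assumptions, not the AppLeS implementation), an agent of this kind evaluates candidate schedules against dynamic forecasts of resource performance and picks the one with the best predicted execution time:

```java
import java.util.Comparator;
import java.util.List;

/** Schematic of an AppLeS-style scheduling agent. The model below (work over
 *  forecast CPU rate, plus data over forecast bandwidth) is a deliberately
 *  simple stand-in for an application-specific performance model. */
record Resource(String name, double forecastMflops, double forecastMBps) {}

record Candidate(Resource r, double workMflop, double dataMB) {
    double predictedTime() {
        return workMflop / r.forecastMflops() + dataMB / r.forecastMBps();
    }
}

class AppLeSAgent {
    /** Pick the candidate schedule with the least predicted execution time. */
    static Candidate best(List<Candidate> candidates) {
        return candidates.stream()
                .min(Comparator.comparingDouble(Candidate::predictedTime))
                .orElseThrow();
    }
}
```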

Pre-2018 CSE ID: CS2000-0644

Combining Workstations and Supercomputers to Support Grid Applications: The Parallel Tomography Experience

(2000)

Computational Grids are becoming an increasingly important and powerful platform for the execution of large-scale, resource-intensive applications. However, it remains a challenge for applications to tap the potential of Grid resources in order to achieve performance. In this paper, we illustrate how applications can leverage Grids to achieve performance through coallocation. We describe our experiences developing a scheduling strategy for a real-life parallel tomography application targeted to Grids which contain both workstations and parallel supercomputers. Our strategy uses dynamic information exported by a supercomputer's batch scheduler to simultaneously schedule on workstations and immediately available supercomputer nodes. This strategy is of great practical interest because it combines resources available to the typical research lab: time-shared workstations and CPU time in remote space-shared supercomputers. We show that this strategy improves the performance of the parallel tomography application compared to traditional scheduling strategies, which target the application to either type of resource alone.
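
One way to sketch the allocation step (the proportional split and all names here are our simplification of the strategy described):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of the co-allocation step: combine immediately available
 *  supercomputer nodes (reported dynamically by the batch scheduler) with
 *  time-shared workstations, splitting tomography slices in proportion to
 *  each host's predicted processing rate. */
class SliceAllocator {
    /** rates: host -> predicted slices/second, for free nodes and workstations alike. */
    static Map<String, Integer> allocate(int totalSlices, Map<String, Double> rates) {
        double totalRate = rates.values().stream().mapToDouble(Double::doubleValue).sum();
        Map<String, Integer> plan = new LinkedHashMap<>();
        int assigned = 0;
        for (Map.Entry<String, Double> e : rates.entrySet()) {
            int n = (int) Math.floor(totalSlices * e.getValue() / totalRate);
            plan.put(e.getKey(), n);
            assigned += n;
        }
        // Hand leftover slices (from rounding) to the fastest host.
        String fastest = rates.entrySet().stream()
                .max(Map.Entry.comparingByValue()).get().getKey();
        plan.merge(fastest, totalSlices - assigned, Integer::sum);
        return plan;
    }
}
```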

Pre-2018 CSE ID: CS2000-0642

Limited Mobile Agents: A Practical Approach

(1999)

Mobile Agents are a newly emerging network application technology in which programs can travel throughout a network to run on machines other than the originating host. Extensive research on Mobile Agents has been ongoing for many years and many Mobile Agent systems have been developed. However, there has not yet been any widespread acceptance of Mobile Agents as a viable technology by the Internet community. We examine the existing forms of mobile code technology and suggest using a limited form of Mobile Agent in conjunction with proxy servers for the purpose of client customization. We explore the issues of security, flexibility, deployability, and the utility of client customization using mobile agents and proxies, with the aim of developing a practical system that can be integrated into today's Internet. The benefits of using these agents for customization include performance improvements through the reduction of network latency, as well as improving the client's interface to distributed data sources through merging, filtering, and custom displays.

Pre-2018 CSE ID: CS2000-0641

Agent Usage Patterns: Bridging the Gap Between Agent-Based Applications

(1999)

The concept of agents -- programs that are capable of transporting themselves across a heterogeneous network to execute and return results -- is a fascinating if troubled area of research. While the theoretical advantages of agents are well established, few agent-based applications have been commercially successful. We argue that this lack of applications stems from a lack of understanding of essential agent usage patterns. In this paper, we identify a set of fundamental patterns that support the design of agent-based applications that scale in performance, reliability, and security. To evaluate them, we implemented some of these patterns in Java, demonstrating customizable and scalable performance.

Pre-2018 CSE ID: CS1999-0638