A process-oriented model for efficient execution of dataflow programs

Abstract

In a dataflow program, an instruction is enabled whenever all of its operands have been produced; at that time, the instruction packet is eligible for execution by a free processor. Compared to a von Neumann computer, the major sources of overhead are (1) the need for matching of tokens destined for the same instruction, (2) routing of tokens among processors, and (3) the fact that instructions are scheduled for execution individually, one at a time. In this paper, we present an execution model that reduces much of this overhead. A dataflow program is broken into sequences of instructions that must be executed sequentially due to their data dependencies. Each sequence is loaded into execution memory as a whole, where it forms a very simple process. A processor is then multiplexed among the ready processes in its local memory. The states of these processes change between running, ready, and blocked, depending on the arrival of operands. The main advantage is that operands produced and consumed within the same sequence are stored directly in one memory operation, thus bypassing the token matching and routing units. Consequently, when executing highly sequential programs, the dataflow machine "degenerates" to an efficient von Neumann computer.
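The following C sketch illustrates the scheduling idea described above: each instruction sequence becomes a lightweight process that moves between blocked, ready, and running, and operands arriving from other sequences are stored directly into its operand slots. All names (Process, deliver_operand, run_one, the fixed operand-slot count) are hypothetical illustrations under assumptions made here, not the data structures or interfaces used in the paper.

```c
/* Hypothetical sketch of the process-oriented execution model:
 * sequences as simple processes multiplexed on one processor. */
#include <stdio.h>
#include <stdbool.h>

#define MAX_OPERANDS 4

typedef enum { BLOCKED, READY, RUNNING } State;

typedef struct Process {
    State state;
    int   operands[MAX_OPERANDS];   /* operand slots for this sequence        */
    bool  present[MAX_OPERANDS];    /* which slots have been filled so far    */
    int   missing;                  /* operands still owed by other sequences */
    struct Process *next;           /* link in the processor's ready queue    */
} Process;

static Process *ready_head = NULL;

/* Place a process on the processor's local ready queue. */
static void enqueue_ready(Process *p) {
    p->state = READY;
    p->next = ready_head;
    ready_head = p;
}

/* A token arriving from another sequence: store the operand directly in one
 * memory operation and unblock the process once all operands are present. */
static void deliver_operand(Process *p, int slot, int value) {
    p->operands[slot] = value;
    p->present[slot] = true;
    if (--p->missing == 0 && p->state == BLOCKED)
        enqueue_ready(p);
}

/* Run one ready process until it finishes or blocks on a missing operand.
 * Results produced and consumed within the sequence would be written straight
 * into its operand slots, bypassing token matching and routing. */
static void run_one(void) {
    if (!ready_head) return;
    Process *p = ready_head;
    ready_head = p->next;
    p->state = RUNNING;

    /* ... execute the loaded instruction sequence here ... */

    p->state = (p->missing > 0) ? BLOCKED : READY;  /* illustrative only */
}

int main(void) {
    Process p = { .state = BLOCKED, .missing = 1 };
    deliver_operand(&p, 0, 42);   /* remote token arrives and unblocks p */
    run_one();
    printf("state after one scheduling step: %d\n", p.state);
    return 0;
}
```

On a highly sequential program, nearly every operand in this sketch would be delivered locally by the running sequence itself, so the scheduler rarely consults the token-matching path, which is the sense in which the machine "degenerates" to an efficient von Neumann computer.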
