Associative Computing
Kent State University
Computer Science Department

Multiple Associative Computing (MASC)


The Multiple Associative Computing (MASC) model [Potter94] of associative SIMD computing grew out of work on the STARAN and MPP computers at Goodyear Aerospace Corporation. Associative memory is accessed by content rather than by address. Data called a "comparand" is sent to the memory, and those memory words that match the comparand set a "responder" flag. Additional circuitry can limit the comparison to certain fields of each memory word, provide the logical OR of the responders, count the number of responders, etc. A global mask register selects the memory words that participate, the fields to be compared are selected by program logic (an address), and responders indicate a successful match. An instruction stream (IS) controller broadcasts instructions to all processing elements (PEs), and each PE interrogates its local data to determine whether or not to perform an instruction. Cell networks of various kinds can be used to connect the PEs for data movement. Associative computing is discussed in detail in [Potter92] and [Krikelis97]. (Earlier work on associative computing is described in [Potter87a, Bridges90, Potter92].)
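To make the search step concrete, the following sketch (hypothetical Python, not an actual STARAN or MASC interface) models each memory word as a list of bits; the mask selects which bit positions (fields) participate in the comparison against the comparand, and a responder flag is set for every word that matches in those positions:

    # Hypothetical sketch of an associative (content-addressed) search.
    # 'mask' selects which bit positions (fields) take part in the comparison.
    def associative_search(words, comparand, mask):
        responders = []
        for word in words:
            match = all(w == c for w, c, m in zip(word, comparand, mask) if m)
            responders.append(match)
        return responders

    words     = [[1, 0, 1, 1], [1, 0, 0, 1], [0, 1, 1, 1]]
    comparand = [1, 0, 1, 0]            # value being searched for
    mask      = [1, 1, 1, 0]            # compare only the first three positions
    resp = associative_search(words, comparand, mask)
    any_responder   = any(resp)         # logical OR of the responder flags
    responder_count = sum(resp)         # count of responders

On the hardware described above, the comparison, the OR of the responders, and the responder count are performed by dedicated circuitry rather than by a loop; the loop here is only a functional stand-in.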

In the MASC model, shown conceptually in the figure below, a cell consists of a PE and a local memory. The "memory" of the associative computer is an array of these cells. There is no global shared memory between cells, but each PE can access the memory in its own cell or in neighboring cells via the cell network. Related data items are grouped into records and are typically stored one per cell (we assume there are more cells than data items). Virtual PEs are easily instantiated by packing multiple records into a cell's memory.
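One simple way to picture this layout is sketched below (hypothetical Python, not MASC notation): each cell holds its own records, several records can be packed into one cell to emulate virtual PEs, and a broadcast step is applied by every cell to each record it holds:

    # Hypothetical cell layout: each cell owns its records; no shared memory.
    cells = [
        {"records": [{"id": 1, "key": 42}]},       # one record per cell
        {"records": [{"id": 2, "key": 17}]},
        {"records": [{"id": 3, "key": 99},         # two records packed into one
                     {"id": 4, "key": 65}]},       # cell's memory: "virtual PEs"
    ]

    # A broadcast instruction is applied by every cell to each record it holds.
    for cell in cells:
        for rec in cell["records"]:
            rec["key"] += 1    # every (virtual) PE executes the same step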

In the MASC model, there are multiple instruction stream (IS) controllers, though far fewer ISs than PEs. The ISs contain the program being executed and can broadcast an instruction to all cells in unit time. Each cell listens to only one IS -- initially all cells listen to the same IS, but a cell can switch to another IS in response to its local data and commands from its current IS. An active PE conditionally executes the commands it receives from its IS based on the contents of its mask register, while an inactive PE simply listens to the commands of its IS until it is reactivated.
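The fragment below is a speculative illustration (not drawn from the MASC papers) of the IS/PE relationship: each cell records which IS it listens to and whether its mask bit is set, a broadcast from an IS is executed only by the active cells assigned to that IS, and a cell may later switch streams based on its local data:

    # Hypothetical model of multiple instruction streams (ISs).
    cells = [
        {"is_id": 0, "mask": True,  "x": 5},
        {"is_id": 0, "mask": False, "x": 7},   # inactive: listens but does not execute
        {"is_id": 1, "mask": True,  "x": 9},
    ]

    def broadcast(is_id, op):
        """IS 'is_id' broadcasts 'op'; only its active cells execute it."""
        for cell in cells:
            if cell["is_id"] == is_id and cell["mask"]:
                op(cell)

    broadcast(0, lambda c: c.update(x=c["x"] * 2))   # only the first cell changes

    # A cell may switch ISs based on its local data and a command from its IS.
    for cell in cells:
        if cell["x"] > 8:
            cell["is_id"] = 1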

Cells without further work to do, called idle cells, are assigned to a specific IS that manages them. If another IS then requests additional cells, idle cells are assigned to it dynamically; if none are available, the requesting IS waits until idle cells become available. This dynamic assignment directly supports control parallelism, since forking a thread simply involves assigning the new thread to an idle IS. A similar form of control parallelism was proposed in [Schimmel92] using superscalar techniques; however, the MASC model appears to be more general.
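As a rough sketch of the idle-cell bookkeeping (the names below are invented for illustration, and the waiting case is simply reported rather than blocked on), dynamic assignment can be modeled as moving cells from the idle pool to the requesting IS:

    # Hypothetical idle-cell manager; IS 0 is assumed to hold the idle pool.
    IDLE_IS = 0

    def fork(cells, requesting_is, needed):
        """Assign up to 'needed' idle cells to 'requesting_is'.
        Returns the number actually assigned; a real IS would wait for the rest."""
        assigned = 0
        for cell in cells:
            if assigned == needed:
                break
            if cell["is_id"] == IDLE_IS:
                cell["is_id"] = requesting_is
                assigned += 1
        return assigned

    pool = [{"is_id": 0}, {"is_id": 0}, {"is_id": 2}]
    got = fork(pool, requesting_is=3, needed=2)   # got == 2; both idle cells now listen to IS 3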

The MASC model supports associative computing by providing constant time functions for associative searching and selection, logical operations (AND and OR), maximum, minimum, least upper bound, and greatest lower bound. Constant time searching permits the simultaneous examination and identification of all those cells that meet the search criteria. These identified cells, called responders, can then become the new set of active cells. An IS has the ability to detect the presence of responders, access active cells in parallel or sequentially, and restore previously active cells to the set of idle cells. The constant time maximum (minimum) functions retrieve either the greatest (least) value of a cell variable or the index of the PE containing that value.
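The sketch below gives software stand-ins for some of these primitives (hypothetical Python; on MASC hardware each of these is a constant-time broadcast/response operation, not a loop over cells): responder detection, selecting one responder for sequential access, and the maximum of a cell variable together with the index of the PE holding it:

    # Hypothetical software stand-ins for MASC's constant-time primitives.
    def any_responders(resp):              # "is there a responder?"
        return any(resp)

    def pick_one(resp):                    # select one responder for sequential access
        return next((i for i, r in enumerate(resp) if r), None)

    def max_value(cells, field, resp):     # maximum over the active (responding) cells
        active = [c[field] for c, r in zip(cells, resp) if r]
        return max(active) if active else None

    def max_index(cells, field, resp):     # index of the PE holding that maximum
        best = None
        for i, (c, r) in enumerate(zip(cells, resp)):
            if r and (best is None or c[field] > cells[best][field]):
                best = i
        return best

    cells = [{"key": 4}, {"key": 9}, {"key": 7}]
    resp  = [True, False, True]            # cells 0 and 2 are the responders
    any_responders(resp)                   # True
    max_value(cells, "key", resp)          # 7 (cell 1 is excluded: not a responder)
    max_index(cells, "key", resp)          # 2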

While broadcast data transfers can be handled efficiently by the IS processors and their interconnection network, MASC also assumes the existence of a cell interconnection network to handle parallel data movement. However, a surprising number of efficient (even optimal) algorithms do not require any network, which is useful since network movements can be considerably slower than PE operations.