Flynn's Taxonomy of Parallel Systems

In the 1960s Flynn divided computer architectures into four categories, based on the number of instruction streams and the number of data streams:
  1. SISD is single instruction, single data. The system executes one instruction at a time, operating on a single datum.
  2. SIMD is single instruction, multiple data. The system executes one instruction at a time, but that operation is applied to multiple data items concurrently. The mental model is of a master or control processor that at every cycle sends a common instruction to each processor to execute. So, for example, a vector update y = y + alpha*x could be performed by having each processor simultaneously handle a different component of the vectors (see the sketch after this list). Graphics processing units (GPUs) work this way. Vector machines are also considered to fit in this category, even though they do not necessarily operate on multiple data simultaneously but are instead heavily pipelined.
  3. MIMD is multiple instruction, multiple data. The system executes multiple operations on multiple data items simultaneously; arbitrary operations (i.e. different instructions) may be applied to different data at the same time. In a MIMD parallel machine each processor may be programmed as a standard sequential machine, running sequential jobs. A particular machine can exhibit both SIMD and MIMD parallelism at different levels, the multiprocessor Cray T90 being one example. MIMD parallelism is more flexible and more common than SIMD parallelism, which now usually appears within functional units or memory systems.
  4. MISD is multiple instruction, single data, a category that essentially does not exist. Sometimes systolic arrays are construed as MISD. However, systolic arrays have not been commercially viable because they tend to be special purpose, defeating the raison d'etre of programmable computers. Their main payoff has been for generating academic publications.
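
The vector update in the SIMD item above can be made concrete with a minimal C sketch. The function name axpy and the array arguments are illustrative; the point is that every iteration applies the identical operation to a different component, which is exactly the execution pattern a SIMD machine, a vectorizing compiler, or a GPU exploits.

    #include <stddef.h>

    /* Data-parallel vector update y = y + alpha*x.
     * Each iteration performs the same instruction on a different
     * component, so a SIMD machine (or a vectorizing compiler)
     * can update many components at once. */
    void axpy(size_t n, double alpha, const double *x, double *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] += alpha * x[i];
    }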

Flynn proposed this taxonomy before machines in most of these categories existed, and it clarified thought and put a stop to arguments among people with differing ideas of what a "parallel machine" was.

A category not proposed by Flynn is prominent now: SPMD, or single program, multiple data. SPMD is really a form of MIMD in which every processor executes the same program, but on different data; in full MIMD each processor can run a different program. This is more a programming-model distinction than a hardware distinction. SPMD is more restrictive than MIMD, but it is now the dominant approach for distributed-memory parallel computing. The reason is simple but terrifying: debugging an SPMD code is difficult and time consuming; debugging a full-blown MIMD computation is nearly impossible.
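
A minimal sketch of the SPMD style, assuming MPI as the message-passing layer (the problem size N and the dummy work are invented for illustration): every process runs this same program, and MPI_Comm_rank tells each one which slice of the conceptual global data it owns.

    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000   /* global problem size (illustrative) */

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?  */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes?  */

        /* Same program everywhere, different data: each rank works on
         * its own contiguous slice of the global index range. */
        int chunk = N / size;
        int lo = rank * chunk;
        int hi = (rank == size - 1) ? N : lo + chunk;

        double local_sum = 0.0;
        for (int i = lo; i < hi; i++)
            local_sum += (double)i;             /* stand-in for real work */

        double global_sum;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %g\n", global_sum);

        MPI_Finalize();
        return 0;
    }

The contrast with full MIMD is that nothing here forces the ranks to stay in lockstep, but every process was launched from the same executable; a true MIMD job could start entirely different programs on different processors.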