Parallel Programming Models


Distributed memory programming is more general than shared memory programming: programs written in that model will also run on shared memory machines, but not vice versa. The generality comes at a price, however: the user must carry out an extremely difficult task - explicitly partitioning the program's data and tasks among the different processing elements.

On a distributed memory machine a parallel program consists of multiple processes running on the different processing elements (each processing element consists of a processor and its local memory, but as a shorthand they are sometimes referred to simply as "processors"). The processes must coordinate their activities and communicate data via messages of some sort.

Message Passing

The most common form is for the processes to use send/receive pairs. Suppose those take the form
send(int*  buffer,   /* pointer to data */
     int   count,    /* number of items */
     int   dest)     /* who to send to   */
and
recv(int*  buffer,   /* pointer to data */
     int   count,    /* number of items */
     int   src)      /* who to recv from */
In reality, message passing routines require more arguments as we will see with MPI. Now suppose that processes are assigned unique ID numbers from 0 to p-1; these are often called the ranks of the processes. If process 0 sends an integer k to process 1, it would call
send(buffer=&k, count=1, dest=1);
and process 1 executes
recv(buffer=&i, count=1, src=0);
Note that it is not necessary that process 1 receives k into a local variable with the same name.

Although each processor needs to execute different code, it is not necessary to write two different programs. Instead a single code branches based on the executing process's rank:

 if (my_rank == 0)
      send(&k, 1, 1);    /* send to process 1 */
 else if (my_rank == 1)
      recv(&i, 1, 0);    /* receive from process 0 */
This is the basis of SPMD (single program, multiple data) programming. Even this simple example of message passing brings out two questions.
  1. Buffered versus synchronous communication: in a synchronous send, process 0 first checks that process 1 is ready to receive, and transfers the integer k only once that is verified. In a buffered send, the system copies the message into a system memory buffer, where it waits until process 1 executes the recv(); the sender can continue immediately.
  2. If multiple messages are sent from process 0 to 1, how does process 1 tell which is which?


  • Last updated: Mon Feb 19 14:36:04 EST 2007, added to precede MPI notes