Pthread Synchronization


Because pthreads share almost all of their memory, there is no need for explicit communication between threads. However, there is a strong need for synchronization. Pthreads comes with only a small set of synchronization primitives, out of which you have to build your own (higher-level) synchronization methods.

A critical section of code is a part that must be executed "atomically", that is, by only one thread at a time. Suppose that in the dotproduct example we had each subtask just add its part of the summation into the variable sum (which was initialized to 0.0 before the threads were created). Although that seems to be all right, beware that the single C statement

      sum += val;
can turn into several machine instructions:
  load sum from memory
  load val from memory
  add val to sum
  write sum back to memory
Suppose that val in task 1 is 1.5 and val in task 2 is -3.2. Task 1 reads sum and starts adding its val to it. At the same time, task 2 reads sum, adds its val to it, and writes sum back to memory. Then task 1 finishes its add and writes its copy of sum back to memory. After that, both tasks are completed - but the value of sum is 1.5, not (-3.2 + 1.5) = -1.7. Task 2's update has been lost.

The addition of val into sum is a critical section; only one task at a time can be allowed to execute it. More generally, a critical section can be a lengthy code section.
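
Here is a minimal sketch that makes the lost update visible; it is not part of the dotproduct code, and the thread and iteration counts are arbitrary. Several threads repeatedly add into an unprotected shared sum, and the final value almost always falls short of the expected total:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NITER    1000000

double sum = 0.0;                /* shared and *not* protected: a data race */

void *adder(void *arg) {
  int k;
  for (k = 0; k < NITER; k++)
    sum += 1.0;                  /* read-modify-write, not atomic */
  return NULL;
}

int main(void) {
  pthread_t tid[NTHREADS];
  int k;
  for (k = 0; k < NTHREADS; k++)
    pthread_create(&tid[k], NULL, adder, NULL);
  for (k = 0; k < NTHREADS; k++)
    pthread_join(tid[k], NULL);
  printf("sum = %f, expected %f\n", sum, (double) NTHREADS*NITER);
  return 0;
}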

Mutexes

Pthreads provides mutexes. A thread that wants to execute a critical section "locks" the mutex, at which time it is said to "own the lock" or "own the mutex". It executes the critical section, then "unlocks" the mutex so that other threads can access the resource guarded by the mutex. The critical section is also said to be "protected" by the mutex. A pthreads implementation must guarantee that only one thread at a time can hold a given mutex; provided every thread accesses the shared data only while holding the lock, no thread can write the protected data, or read it in an intermediate state, while another thread is updating it.

In pthreads, using a mutex is a four-step process:

  1. Declare and initialize the mutex:
      pthread_mutex_t mutex;
      pthread_mutex_init(&mutex, NULL);  /* NULL attr gives the default attributes */
    
  2. Get a lock on the mutex:
      pthread_mutex_lock(&mutex);
    
  3. Do the deed: execute the critical section.
  4. Unlock the mutex:
      pthread_mutex_unlock(&mutex);
    
If a thread tries to lock the mutex (i.e., execute step 2 above) while another thread already holds it, the second thread will block until the mutex is unlocked.
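
Putting the four steps together, here is a minimal complete sketch. The shared counter and the worker() function are illustrative, not part of the dotproduct example; note the pthread_mutex_destroy() call once the mutex is no longer needed:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t mutex;
int counter = 0;                     /* shared data guarded by mutex  */

void *worker(void *arg) {
  pthread_mutex_lock(&mutex);        /* step 2: get a lock            */
  counter++;                         /* step 3: the critical section  */
  pthread_mutex_unlock(&mutex);      /* step 4: unlock                */
  return NULL;
}

int main(void) {
  pthread_t t1, t2;
  pthread_mutex_init(&mutex, NULL);  /* step 1: initialize            */
  pthread_create(&t1, NULL, worker, NULL);
  pthread_create(&t2, NULL, worker, NULL);
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  pthread_mutex_destroy(&mutex);     /* clean up when done            */
  printf("counter = %d\n", counter);
  return 0;
}

When blocking is undesirable, pthreads also provides pthread_mutex_trylock(), which returns 0 if it obtained the lock and EBUSY, without blocking, if another thread holds it. The idiom, as a sketch:

      if (pthread_mutex_trylock(&mutex) == 0) {
          /* got the lock: execute the critical section */
          pthread_mutex_unlock(&mutex);
      } else {
          /* mutex was busy: do other work and try again later */
      }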

The earlier dotproduct example can then be updated. First, add the declaration

pthread_mutex_t sum_guard = PTHREAD_MUTEX_INITIALIZER;
as a global variable. This declares the mutex and statically initializes it at the same time. Then inside the local_dot() function, replace
locsum[seg->segment] = val;
with
     pthread_mutex_lock(&sum_guard);
     sum += val;
     pthread_mutex_unlock(&sum_guard);
and eliminate the line
sum += locsum[k];
from dotprod(). The mutex-protected addition directly into sum replaces the serial gathering of the locsum[] partial sums that followed the pthread_join() calls; the joins are still needed so that dotprod() does not return before every thread has added its contribution.
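
With those edits, the thread function becomes something like the following sketch; it assumes the dp_args structure and the sum_guard and sum globals shown in the full listing below:

void *local_dot(void *arg) {
  dp_args *seg = (dp_args *) arg;
  int k;
  double val = 0.0;
  for (k = 0; k < seg->length; k++)
    val += seg->x[k]*seg->y[k];
  pthread_mutex_lock(&sum_guard);    /* protect the shared sum */
  sum += val;
  pthread_mutex_unlock(&sum_guard);
  return NULL;
}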

Condition Variables

Condition variables provide a more general synchronization mechanism. Mutexes are like Boolean variables, either locked or unlocked. Condition variables allow you to wait for a more general condition to occur. However, every condition variable must be protected by a mutex - they are not strictly separate creatures. The procedure is:

  1. A thread obtains the mutex lock and tests the condition under the protection of the mutex. No other thread should alter the condition without holding that mutex.
  2. If the condition is true, the thread performs its task and releases the mutex.
  3. If the condition is not true, then the thread calls pthread_cond_wait(), whereupon
    a. the mutex is automatically released for the thread,
    b. the thread is put to sleep waiting on the condition variable to become true.
  4. When another thread changes the condition to become true, it wakes the sleeping thread using pthread_cond_signal(). The woken thread then
    a. reacquires the mutex automatically,
    b. reevaluates the condition,
    c. if the condition is not true, goes back to sleep,
    d. if the condition is true, continues with its work and releases the mutex.
So when you look at condition variable code, it may seem that a deadlock occurs - because the mutex lock is obtained and then the thread waits on the condition variable. Just keep in mind that the call to pthread_cond_wait() unlocks the mutex for you.

The problem lies in 4b and 4c: you must retest the condition on being awoken! For one thing, the condition may have changed again before your thread was brought fully back to life. More deadly still, the pthread standard allows spurious wakeups - wakeups not initiated by your code. So always retest the condition on waking!
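
The standard defense is to fold the test and the wait into a while loop, so the condition is retested after every wakeup. In this sketch, condition_holds() stands for whatever test protects your data:

      pthread_mutex_lock(&mutex);
      while (!condition_holds())             /* retested after every wakeup   */
          pthread_cond_wait(&cond, &mutex);  /* unlocks, sleeps, then relocks */
      /* the condition is guaranteed true here, under the mutex */
      pthread_mutex_unlock(&mutex);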

You can do two really stupid things with condition variables; avoid both. First, use only a single mutex with a given condition variable, even though the standard lets you use multiple mutexes. Second, associate only one condition with the condition variable (although that can be a compound condition, like x > 3 and mod(n,p) = 0).

Here is an example of using a condition variable in the dotproduct routine. Instead of using pthread_join(), we keep a count variable that records how many threads have not yet finished adding their partial sums into sum. When that count reaches zero, dotprod() can return. This lets the worker threads run in detached mode, so we never call pthread_join() on them. Note that the retest is done with a while loop; this is a standard idiom in pthread programming.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define FALSE 0
#define TRUE 1
#define CHUNKSZ 1000

pthread_mutex_t sum_guard = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t sum_count_cv = PTHREAD_COND_INITIALIZER;
int count;
double sum = 0.0;

typedef struct{int segment;     /* Segment number */
               int length;      /* Segment length */
               double *x;      
               double *y; } dp_args;

void *local_dot(void *arg) {
  dp_args *seg = (dp_args *) arg;
  int k;
  double *x = seg->x;
  double *y = seg->y;
  double val = 0.0;
  for (k = 0; k < seg->length; k++)
    val += x[k]*y[k];
  pthread_mutex_lock(&sum_guard);
    sum += val;
    count--;                      /* one fewer thread still at work */
    if (count == 0) pthread_cond_signal(&sum_count_cv);
  pthread_mutex_unlock(&sum_guard);
  return NULL;
}

double dotprod(int n, double x[], double y[]) {
   dp_args *seg; 
   pthread_t *chunk;
   static int chunksize = CHUNKSZ;  /* Not a great way to do this */
   int k = 0;
   int retval;
   int start = 0;
   int odd = FALSE;
   int nsegs = n/chunksize;

   if (n%chunksize != 0) {nsegs++; odd = TRUE;}
   chunk = (pthread_t *) malloc(nsegs*sizeof(pthread_t));
   seg   = (dp_args *) malloc(nsegs*sizeof(dp_args));
   if (seg == NULL || chunk == NULL) {
       printf("failure to allocate chunk/seg\n");
       exit(-1);
   }

/* ------------------------------------------------------- */
/* Spawn off nsegs threads to compute chunks of dotproduct */
/* ------------------------------------------------------- */

   sum = 0.0;
   count = nsegs;
   for (k = 0; k < nsegs; k++) {
/*   ------------------------------------------- */
/*   Load up the dp_args object for k-th segment */
/*   ------------------------------------------- */
     start = k*chunksize;
     seg[k].length = chunksize;
     if (odd == TRUE && k == nsegs-1) {
        seg[k].length = n - (nsegs-1)*chunksize;
     }
     seg[k].x  = &x[start];
     seg[k].y  = &y[start];
     seg[k].segment  = k;

/*   ----------------------------------- */
/*   Try to create and detach the thread */
/*   ----------------------------------- */
     printf("Spawning thread %d \n", k);
     retval = pthread_create(&chunk[k], NULL, local_dot, (void *) &seg[k]);
     if (retval != 0) {
        printf("failure to create thread %d\n", k);
        exit(-1);
     }
     pthread_detach(chunk[k]);
   }


/* ------------------------------------------*/
/* Wait until other threads have completed.  */
/* ------------------------------------------*/
   pthread_mutex_lock(&sum_guard);
   while (count > 0)
      pthread_cond_wait(&sum_count_cv, &sum_guard);
   pthread_mutex_unlock(&sum_guard);

   free(chunk);
   free(seg);
   return sum;
}
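
For completeness, a short driver shows how dotprod() might be called; the vector length and contents are arbitrary choices for illustration:

int main(void) {
  int k, n = 5000;
  double *x = (double *) malloc(n*sizeof(double));
  double *y = (double *) malloc(n*sizeof(double));
  for (k = 0; k < n; k++) { x[k] = 1.0; y[k] = 2.0; }
  printf("dotprod = %f, expected %f\n", dotprod(n, x, y), 2.0*n);
  free(x); free(y);
  return 0;
}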
