11.6. Thread Synchronization

When multiple threads of control share the same memory, we need to make sure that each thread sees a consistent view of its data. If each thread uses variables that other threads don't read or modify, no consistency problems exist. Similarly, if a variable is read-only, there is no consistency problem with more than one thread reading its value at the same time. However, when one thread can modify a variable that other threads can read or modify, we need to synchronize the threads to ensure that they don't use an invalid value when accessing the variable's memory contents.

When one thread modifies a variable, other threads can potentially see inconsistencies when reading the value of the variable. On processor architectures in which the modification takes more than one memory cycle, this can happen when the memory read is interleaved between the memory write cycles. Of course, this behavior is architecture dependent, but portable programs can't make any assumptions about what type of processor architecture is being used. Figure 11.7 shows a hypothetical example of two threads reading and writing the same variable. In this example, thread A reads the variable and then writes a new value to it, but the write operation takes two memory cycles. If thread B reads the same variable between the two write cycles, it will see an inconsistent value.

Figure 11.7. Interleaved memory cycles with two threads
To solve this problem, the threads have to use a lock that will allow only one thread to access the variable at a time. Figure 11.8 shows this synchronization. If it wants to read the variable, thread B acquires a lock. Similarly, when thread A updates the variable, it acquires the same lock. Thus, thread B will be unable to read the variable until thread A releases the lock.

Figure 11.8. Two threads synchronizing memory access
You also need to synchronize two or more threads that might try to modify the same variable at the same time. Consider the case in which you increment a variable (Figure 11.9). The increment operation is usually broken down into three steps:

1. Read the memory location into a register.
2. Increment the value in the register.
3. Write the new value back to the memory location.
Figure 11.9. Two unsynchronized threads incrementing the same variable

If two threads try to increment the same variable at almost the same time without synchronizing with each other, the results can be inconsistent. You end up with a value that is either one or two greater than before, depending on the value observed when the second thread starts its operation. If the second thread performs step 1 before the first thread performs step 3, the second thread will read the same initial value as the first thread, increment it, and write it back, with no net effect.

If the modification is atomic, then there isn't a race. In the previous example, if the increment takes only one memory cycle, then no race exists. If our data always appears to be sequentially consistent, then we need no additional synchronization. Our operations are sequentially consistent when multiple threads can't observe inconsistencies in our data. In modern computer systems, memory accesses take multiple bus cycles, and multiprocessors generally interleave bus cycles among multiple processors, so we aren't guaranteed that our data is sequentially consistent. In a sequentially consistent environment, we can explain modifications to our data as a sequence of steps taken by the running threads. We can say such things as "Thread A incremented the variable, then thread B incremented the variable, so its value is two greater than before" or "Thread B incremented the variable, then thread A incremented the variable, so its value is two greater than before." No possible ordering of the two threads can result in any other value of the variable.

Besides the computer architecture, races can arise from the ways in which our programs use variables, creating places where it is possible to view inconsistencies. For example, we might increment a variable and then make a decision based on its value. The combination of the increment step and the decision-making step isn't atomic, so this opens a window where inconsistencies can arise.

Mutexes

We can protect our data and ensure access by only one thread at a time by using the pthreads mutual-exclusion interfaces. A mutex is basically a lock that we set (lock) before accessing a shared resource and release (unlock) when we're done. While it is set, any other thread that tries to set it will block until we release it. If more than one thread is blocked when we unlock the mutex, then all threads blocked on the lock will be made runnable, and the first one to run will be able to set the lock. The others will see that the mutex is still locked and go back to waiting for it to become available again. In this way, only one thread will proceed at a time.

This mutual-exclusion mechanism works only if we design our threads to follow the same data-access rules. The operating system doesn't serialize access to data for us. If we allow one thread to access a shared resource without first acquiring a lock, then inconsistencies can occur even though the rest of our threads do acquire the lock before attempting to access the shared resource.

A mutex variable is represented by the pthread_mutex_t data type. Before we can use a mutex variable, we must first initialize it by either setting it to the constant PTHREAD_MUTEX_INITIALIZER (for statically allocated mutexes only) or calling pthread_mutex_init. If we allocate the mutex dynamically (by calling malloc, for example), then we need to call pthread_mutex_destroy before freeing the memory.
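These two functions are declared in <pthread.h>; the prototypes below follow the POSIX specification:

#include <pthread.h>

int pthread_mutex_init(pthread_mutex_t *restrict mutex,
                       const pthread_mutexattr_t *restrict attr);
int pthread_mutex_destroy(pthread_mutex_t *mutex);

        Both return: 0 if OK, error number on failure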
To initialize a mutex with the default attributes, we set attr to NULL. We will discuss nondefault mutex attributes in Section 12.4. To lock a mutex, we call pthread_mutex_lock. If the mutex is already locked, the calling thread will block until the mutex is unlocked. To unlock a mutex, we call pthread_mutex_unlock.
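The locking functions have the following POSIX prototypes (pthread_mutex_trylock is discussed next):

#include <pthread.h>

int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_trylock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);

        All return: 0 if OK, error number on failure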
If a thread can't afford to block, it can use pthread_mutex_trylock to lock the mutex conditionally. If the mutex is unlocked at the time pthread_mutex_trylock is called, then pthread_mutex_trylock will lock the mutex without blocking and return 0. Otherwise, pthread_mutex_trylock will fail, returning EBUSY without locking the mutex.

Example

Figure 11.10 illustrates a mutex used to protect a data structure. When more than one thread needs to access a dynamically allocated object, we can embed a reference count in the object to ensure that we don't free its memory before all threads are done using it. We lock the mutex before incrementing the reference count, decrementing the reference count, and checking whether the reference count reaches zero. No locking is necessary when we initialize the reference count to 1 in the foo_alloc function, because the allocating thread holds the only reference to it so far. If we were to place the structure on a list at this point, it could be found by other threads, so we would need to lock it first. Before using the object, threads are expected to add a reference to it. When they are done, they must release the reference. When the last reference is released, the object's memory is freed.

Figure 11.10. Using a mutex to protect a data structure

#include <stdlib.h>
#include <pthread.h>

struct foo {
    int             f_count;
    pthread_mutex_t f_lock;
    /* ... more stuff here ... */
};

struct foo *
foo_alloc(void) /* allocate the object */
{
    struct foo *fp;

    if ((fp = malloc(sizeof(struct foo))) != NULL) {
        fp->f_count = 1;
        if (pthread_mutex_init(&fp->f_lock, NULL) != 0) {
            free(fp);
            return(NULL);
        }
        /* ... continue initialization ... */
    }
    return(fp);
}

void
foo_hold(struct foo *fp) /* add a reference to the object */
{
    pthread_mutex_lock(&fp->f_lock);
    fp->f_count++;
    pthread_mutex_unlock(&fp->f_lock);
}

void
foo_rele(struct foo *fp) /* release a reference to the object */
{
    pthread_mutex_lock(&fp->f_lock);
    if (--fp->f_count == 0) { /* last reference */
        pthread_mutex_unlock(&fp->f_lock);
        pthread_mutex_destroy(&fp->f_lock);
        free(fp);
    } else {
        pthread_mutex_unlock(&fp->f_lock);
    }
}
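As a brief usage sketch, assuming the definitions from Figure 11.10 are in scope, a creating thread might share a foo object with another thread as shown below. The worker function and the hand-off scheme are hypothetical, not part of the original example:

void *
worker(void *arg) /* hypothetical worker thread */
{
    struct foo *fp = arg;

    /* ... use the object's fields here ... */
    foo_rele(fp); /* drop the worker's reference */
    return(NULL);
}

int
main(void)
{
    pthread_t   tid;
    struct foo *fp;

    if ((fp = foo_alloc()) == NULL)
        return(1);
    foo_hold(fp); /* take a reference on behalf of the worker */
    pthread_create(&tid, NULL, worker, fp);
    foo_rele(fp); /* drop the creator's reference */
    pthread_join(tid, NULL);
    return(0);
}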
Deadlock Avoidance

A thread will deadlock itself if it tries to lock the same mutex twice, but there are less obvious ways to create deadlocks with mutexes. For example, when we use more than one mutex in our programs, a deadlock can occur if we allow one thread to hold a mutex and block while trying to lock a second mutex at the same time that another thread holding the second mutex tries to lock the first mutex. Neither thread can proceed, because each needs a resource that is held by the other, so we have a deadlock.

Deadlocks can be avoided by carefully controlling the order in which mutexes are locked. For example, assume that you have two mutexes, A and B, that you need to lock at the same time. If all threads always lock mutex A before mutex B, no deadlock can occur from the use of the two mutexes (but you can still deadlock on other resources). Similarly, if all threads always lock mutex B before mutex A, no deadlock will occur. You'll have the potential for a deadlock only when one thread attempts to lock the mutexes in the opposite order from another thread.

Sometimes, an application's architecture makes it difficult to apply a lock ordering. If enough locks and data structures are involved that the functions you have available can't be molded to fit a simple hierarchy, then you'll have to try some other approach.

In this case, you might be able to release your locks and try again at a later time. You can use the pthread_mutex_trylock interface to avoid deadlocking in this case. If you are already holding locks and pthread_mutex_trylock is successful, then you can proceed. If it can't acquire the lock, however, you can release the locks you already hold, clean up, and try again later (a minimal sketch of this strategy appears just before Figure 11.11).

Example

In this example, we update Figure 11.10 to show the use of two mutexes. We avoid deadlocks by ensuring that when we need to acquire two mutexes at the same time, we always lock them in the same order. The second mutex protects a hash list that we use to keep track of the foo data structures. Thus, the hashlock mutex protects both the fh hash table and the f_next hash link field in the foo structure. The f_lock mutex in the foo structure protects access to the remainder of the foo structure's fields.

Comparing Figure 11.11 with Figure 11.10, we see that our allocation function now locks the hash list lock, adds the new structure to a hash bucket, and, before unlocking the hash list lock, locks the mutex in the new structure. Since the new structure is placed on a global list, other threads can find it, so we need to block them if they try to access the new structure before we are done initializing it.

The foo_find function locks the hash list lock and searches for the requested structure. If it is found, we increase the reference count and return a pointer to the structure. Note that we honor the lock ordering by locking the hash list lock in foo_find before foo_hold locks the foo structure's f_lock mutex.

Now with two locks, the foo_rele function is more complicated. If this is the last reference, we need to unlock the structure mutex so that we can acquire the hash list lock, since we'll need to remove the structure from the hash list. Then we reacquire the structure mutex. Because we could have blocked since the last time we held the structure mutex, we need to recheck the condition to see whether we still need to free the structure. If another thread found the structure and added a reference to it while we blocked to honor the lock ordering, we simply need to decrement the reference count, unlock everything, and return.

This locking is complex, so we need to revisit our design. We can simplify things considerably by using the hash list lock to protect the structure reference count, too. The structure mutex can be used to protect everything else in the foo structure. Figure 11.12 reflects this change. Note how much simpler the program in Figure 11.12 is compared to the program in Figure 11.11. The lock-ordering issues surrounding the hash list and the reference count go away when we use the same lock for both purposes.

Multithreaded software design involves these types of tradeoffs. If your locking granularity is too coarse, you end up with too many threads blocking behind the same locks, with little improvement possible from concurrency. If your locking granularity is too fine, then you suffer bad performance from excess locking overhead, and you end up with complex code. As a programmer, you need to find the correct balance between code complexity and performance, and still satisfy your locking requirements.
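Returning to the pthread_mutex_trylock strategy described above, here is a minimal sketch of the lock, back-off, and retry loop. The two mutexes and the use of sched_yield are hypothetical, not part of the book's examples:

#include <pthread.h>
#include <sched.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/*
 * Acquire both locks without risking deadlock: if the second
 * lock is busy, release the first and start over.
 */
void
lock_both(void)
{
    for (;;) {
        pthread_mutex_lock(&lock_a);
        if (pthread_mutex_trylock(&lock_b) == 0)
            return; /* got both locks */
        pthread_mutex_unlock(&lock_a); /* back off */
        sched_yield(); /* give other threads a chance to run */
    }
}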
Figure 11.11. Using two mutexes

#include <stdlib.h>
#include <pthread.h>

#define NHASH 29
#define HASH(id) (((unsigned long)(id))%NHASH)

struct foo *fh[NHASH];

pthread_mutex_t hashlock = PTHREAD_MUTEX_INITIALIZER;

struct foo {
    int             f_count;
    pthread_mutex_t f_lock;
    struct foo     *f_next; /* protected by hashlock */
    int             f_id;
    /* ... more stuff here ... */
};

struct foo *
foo_alloc(int id) /* allocate the object */
{
    struct foo *fp;
    int         idx;

    if ((fp = malloc(sizeof(struct foo))) != NULL) {
        fp->f_count = 1;
        if (pthread_mutex_init(&fp->f_lock, NULL) != 0) {
            free(fp);
            return(NULL);
        }
        fp->f_id = id;
        idx = HASH(id);
        pthread_mutex_lock(&hashlock);
        fp->f_next = fh[idx];
        fh[idx] = fp;
        pthread_mutex_lock(&fp->f_lock);
        pthread_mutex_unlock(&hashlock);
        /* ... continue initialization ... */
        pthread_mutex_unlock(&fp->f_lock);
    }
    return(fp);
}

void
foo_hold(struct foo *fp) /* add a reference to the object */
{
    pthread_mutex_lock(&fp->f_lock);
    fp->f_count++;
    pthread_mutex_unlock(&fp->f_lock);
}

struct foo *
foo_find(int id) /* find an existing object */
{
    struct foo *fp;
    int         idx;

    idx = HASH(id);
    pthread_mutex_lock(&hashlock);
    for (fp = fh[idx]; fp != NULL; fp = fp->f_next) {
        if (fp->f_id == id) {
            foo_hold(fp);
            break;
        }
    }
    pthread_mutex_unlock(&hashlock);
    return(fp);
}

void
foo_rele(struct foo *fp) /* release a reference to the object */
{
    struct foo *tfp;
    int         idx;

    pthread_mutex_lock(&fp->f_lock);
    if (fp->f_count == 1) { /* last reference */
        pthread_mutex_unlock(&fp->f_lock);
        pthread_mutex_lock(&hashlock);
        pthread_mutex_lock(&fp->f_lock);
        /* need to recheck the condition */
        if (fp->f_count != 1) {
            fp->f_count--;
            pthread_mutex_unlock(&fp->f_lock);
            pthread_mutex_unlock(&hashlock);
            return;
        }
        /* remove from list */
        idx = HASH(fp->f_id);
        tfp = fh[idx];
        if (tfp == fp) {
            fh[idx] = fp->f_next;
        } else {
            while (tfp->f_next != fp)
                tfp = tfp->f_next;
            tfp->f_next = fp->f_next;
        }
        pthread_mutex_unlock(&hashlock);
        pthread_mutex_unlock(&fp->f_lock);
        pthread_mutex_destroy(&fp->f_lock);
        free(fp);
    } else {
        fp->f_count--;
        pthread_mutex_unlock(&fp->f_lock);
    }
}

Figure 11.12. Simplified locking

#include <stdlib.h>
#include <pthread.h>

#define NHASH 29
#define HASH(id) (((unsigned long)(id))%NHASH)

struct foo *fh[NHASH];
pthread_mutex_t hashlock = PTHREAD_MUTEX_INITIALIZER;

struct foo {
    int             f_count; /* protected by hashlock */
    pthread_mutex_t f_lock;
    struct foo     *f_next; /* protected by hashlock */
    int             f_id;
    /* ... more stuff here ... */
};

struct foo *
foo_alloc(int id) /* allocate the object */
{
    struct foo *fp;
    int         idx;

    if ((fp = malloc(sizeof(struct foo))) != NULL) {
        fp->f_count = 1;
        if (pthread_mutex_init(&fp->f_lock, NULL) != 0) {
            free(fp);
            return(NULL);
        }
        fp->f_id = id;
        idx = HASH(id);
        pthread_mutex_lock(&hashlock);
        fp->f_next = fh[idx];
        fh[idx] = fp;
        pthread_mutex_lock(&fp->f_lock);
        pthread_mutex_unlock(&hashlock);
        /* ... continue initialization ... */
        pthread_mutex_unlock(&fp->f_lock);
    }
    return(fp);
}

void
foo_hold(struct foo *fp) /* add a reference to the object */
{
    pthread_mutex_lock(&hashlock);
    fp->f_count++;
    pthread_mutex_unlock(&hashlock);
}

struct foo *
foo_find(int id) /* find an existing object */
{
    struct foo *fp;
    int         idx;

    idx = HASH(id);
    pthread_mutex_lock(&hashlock);
    for (fp = fh[idx]; fp != NULL; fp = fp->f_next) {
        if (fp->f_id == id) {
            fp->f_count++;
            break;
        }
    }
    pthread_mutex_unlock(&hashlock);
    return(fp);
}

void
foo_rele(struct foo *fp) /* release a reference to the object */
{
    struct foo *tfp;
    int         idx;

    pthread_mutex_lock(&hashlock);
    if (--fp->f_count == 0) { /* last reference, remove from list */
        idx = HASH(fp->f_id);
        tfp = fh[idx];
        if (tfp == fp) {
            fh[idx] = fp->f_next;
        } else {
            while (tfp->f_next != fp)
                tfp = tfp->f_next;
            tfp->f_next = fp->f_next;
        }
        pthread_mutex_unlock(&hashlock);
        pthread_mutex_destroy(&fp->f_lock);
        free(fp);
    } else {
        pthread_mutex_unlock(&hashlock);
    }
}

Reader-Writer Locks

Reader-writer locks are similar to mutexes, except that they allow for higher degrees of parallelism. With a mutex, the state is either locked or unlocked, and only one thread can lock it at a time. Three states are possible with a reader-writer lock: locked in read mode, locked in write mode, and unlocked. Only one thread at a time can hold a reader-writer lock in write mode, but multiple threads can hold a reader-writer lock in read mode at the same time.

When a reader-writer lock is write-locked, all threads attempting to lock it block until it is unlocked. When a reader-writer lock is read-locked, all threads attempting to lock it in read mode are given access, but any threads attempting to lock it in write mode block until all the threads have relinquished their read locks. Although implementations vary, reader-writer locks usually block additional readers if a lock is already held in read mode and a thread is blocked trying to acquire the lock in write mode. This prevents a constant stream of readers from starving waiting writers.

Reader-writer locks are well suited for situations in which data structures are read more often than they are modified. When a reader-writer lock is held in write mode, the data structure it protects can be modified safely, since only one thread at a time can hold the lock in write mode. When the reader-writer lock is held in read mode, the data structure it protects can be read by multiple threads, as long as the threads first acquire the lock in read mode.

Reader-writer locks are also called shared-exclusive locks. When a reader-writer lock is read-locked, it is said to be locked in shared mode. When it is write-locked, it is said to be locked in exclusive mode.

As with mutexes, reader-writer locks must be initialized before use and destroyed before freeing their underlying memory.
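The initialization and destruction functions have the following POSIX prototypes:

#include <pthread.h>

int pthread_rwlock_init(pthread_rwlock_t *restrict rwlock,
                        const pthread_rwlockattr_t *restrict attr);
int pthread_rwlock_destroy(pthread_rwlock_t *rwlock);

        Both return: 0 if OK, error number on failure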
A reader-writer lock is initialized by calling pthread_rwlock_init. We can pass a null pointer for attr if we want the reader-writer lock to have the default attributes. We discuss reader-writer lock attributes in Section 12.4.

Before freeing the memory backing a reader-writer lock, we need to call pthread_rwlock_destroy to clean it up. If pthread_rwlock_init allocated any resources for the reader-writer lock, pthread_rwlock_destroy frees those resources. If we free the memory backing a reader-writer lock without first calling pthread_rwlock_destroy, any resources assigned to the lock will be lost.

To lock a reader-writer lock in read mode, we call pthread_rwlock_rdlock. To write-lock a reader-writer lock, we call pthread_rwlock_wrlock. Regardless of how we lock a reader-writer lock, we can call pthread_rwlock_unlock to unlock it.
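Their POSIX prototypes are:

#include <pthread.h>

int pthread_rwlock_rdlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_wrlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_unlock(pthread_rwlock_t *rwlock);

        All return: 0 if OK, error number on failure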
Implementations might place a limit on the number of times a reader-writer lock can be locked in shared mode, so we need to check the return value of pthread_rwlock_rdlock. Even though pthread_rwlock_wrlock and pthread_rwlock_unlock have error returns, we don't need to check them if we design our locking properly. The only error returns defined are when we use them improperly, such as with an uninitialized lock, or when we might deadlock by attempting to acquire a lock we already own.

The Single UNIX Specification also defines conditional versions of the reader-writer locking primitives.
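These are pthread_rwlock_tryrdlock and pthread_rwlock_trywrlock:

#include <pthread.h>

int pthread_rwlock_tryrdlock(pthread_rwlock_t *rwlock);
int pthread_rwlock_trywrlock(pthread_rwlock_t *rwlock);

        Both return: 0 if OK, error number on failure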
When the lock can be acquired, these functions return 0. Otherwise, they return the error EBUSY. These functions can be used in situations in which conforming to a lock hierarchy isn't enough to avoid a deadlock, as we discussed previously.

Example

The program in Figure 11.13 illustrates the use of reader-writer locks. A queue of job requests is protected by a single reader-writer lock. This example shows a possible implementation of Figure 11.1, whereby multiple worker threads obtain jobs assigned to them by a single master thread.

In this example, we lock the queue's reader-writer lock in write mode whenever we need to add a job to the queue or remove a job from the queue. Whenever we search the queue, we grab the lock in read mode, allowing all the worker threads to search the queue concurrently. Using a reader-writer lock will improve performance in this case only if threads search the queue much more frequently than they add or remove jobs.

The worker threads take only those jobs that match their thread ID off the queue. Since the job structures are used only by one thread at a time, they don't need any extra locking.

Figure 11.13. Using reader-writer locks

#include <stdlib.h>
#include <pthread.h>

struct job {
    struct job *j_next;
    struct job *j_prev;
    pthread_t   j_id; /* tells which thread handles this job */
    /* ... more stuff here ... */
};

struct queue {
    struct job       *q_head;
    struct job       *q_tail;
    pthread_rwlock_t  q_lock;
};

/*
 * Initialize a queue.
 */
int
queue_init(struct queue *qp)
{
    int err;

    qp->q_head = NULL;
    qp->q_tail = NULL;
    err = pthread_rwlock_init(&qp->q_lock, NULL);
    if (err != 0)
        return(err);
    /* ... continue initialization ... */
    return(0);
}

/*
 * Insert a job at the head of the queue.
 */
void
job_insert(struct queue *qp, struct job *jp)
{
    pthread_rwlock_wrlock(&qp->q_lock);
    jp->j_next = qp->q_head;
    jp->j_prev = NULL;
    if (qp->q_head != NULL)
        qp->q_head->j_prev = jp;
    else
        qp->q_tail = jp; /* list was empty */
    qp->q_head = jp;
    pthread_rwlock_unlock(&qp->q_lock);
}

/*
 * Append a job on the tail of the queue.
 */
void
job_append(struct queue *qp, struct job *jp)
{
    pthread_rwlock_wrlock(&qp->q_lock);
    jp->j_next = NULL;
    jp->j_prev = qp->q_tail;
    if (qp->q_tail != NULL)
        qp->q_tail->j_next = jp;
    else
        qp->q_head = jp; /* list was empty */
    qp->q_tail = jp;
    pthread_rwlock_unlock(&qp->q_lock);
}

/*
 * Remove the given job from a queue.
 */
void
job_remove(struct queue *qp, struct job *jp)
{
    pthread_rwlock_wrlock(&qp->q_lock);
    if (jp == qp->q_head) {
        qp->q_head = jp->j_next;
        if (qp->q_tail == jp)
            qp->q_tail = NULL;
        else
            jp->j_next->j_prev = jp->j_prev;
    } else if (jp == qp->q_tail) {
        qp->q_tail = jp->j_prev;
        jp->j_prev->j_next = jp->j_next;
    } else {
        jp->j_prev->j_next = jp->j_next;
        jp->j_next->j_prev = jp->j_prev;
    }
    pthread_rwlock_unlock(&qp->q_lock);
}

/*
 * Find a job for the given thread ID.
 */
struct job *
job_find(struct queue *qp, pthread_t id)
{
    struct job *jp;

    if (pthread_rwlock_rdlock(&qp->q_lock) != 0)
        return(NULL);

    for (jp = qp->q_head; jp != NULL; jp = jp->j_next)
        if (pthread_equal(jp->j_id, id))
            break;

    pthread_rwlock_unlock(&qp->q_lock);
    return(jp);
}

Condition Variables

Condition variables are another synchronization mechanism available to threads. Condition variables provide a place for threads to rendezvous. When used with mutexes, condition variables allow threads to wait in a race-free way for arbitrary conditions to occur.

The condition itself is protected by a mutex. A thread must first lock the mutex to change the condition state.
Other threads will not notice the change until they acquire the mutex, because the mutex must be locked to be able to evaluate the condition.

Before a condition variable is used, it must first be initialized. A condition variable, represented by the pthread_cond_t data type, can be initialized in two ways. We can assign the constant PTHREAD_COND_INITIALIZER to a statically allocated condition variable, but if the condition variable is allocated dynamically, we can use the pthread_cond_init function to initialize it. We can use the pthread_cond_destroy function to deinitialize a condition variable before freeing its underlying memory.
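The POSIX prototypes are:

#include <pthread.h>

int pthread_cond_init(pthread_cond_t *restrict cond,
                      const pthread_condattr_t *restrict attr);
int pthread_cond_destroy(pthread_cond_t *cond);

        Both return: 0 if OK, error number on failure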
Unless you need to create a condition variable with nondefault attributes, the attr argument to pthread_cond_init can be set to NULL. We will discuss condition variable attributes in Section 12.4.

We use pthread_cond_wait to wait for a condition to be true. A variant, pthread_cond_timedwait, is provided to return an error code if the condition hasn't been satisfied in the specified amount of time.
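The POSIX prototypes are:

#include <pthread.h>

int pthread_cond_wait(pthread_cond_t *restrict cond,
                      pthread_mutex_t *restrict mutex);
int pthread_cond_timedwait(pthread_cond_t *restrict cond,
                           pthread_mutex_t *restrict mutex,
                           const struct timespec *restrict timeout);

        Both return: 0 if OK, error number on failure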
The mutex passed to pthread_cond_wait protects the condition. The caller passes it locked to the function, which then atomically places the calling thread on the list of threads waiting for the condition and unlocks the mutex. This closes the window between the time that the condition is checked and the time that the thread goes to sleep waiting for the condition to change, so that the thread doesn't miss a change in the condition. When pthread_cond_wait returns, the mutex is again locked.

The pthread_cond_timedwait function works the same as the pthread_cond_wait function with the addition of the timeout. The timeout value specifies how long we will wait. It is specified by the timespec structure, where a time value is represented by a number of seconds and partial seconds. Partial seconds are specified in units of nanoseconds:

struct timespec {
    time_t tv_sec;   /* seconds */
    long   tv_nsec;  /* nanoseconds */
};

Using this structure, we need to specify how long we are willing to wait as an absolute time instead of a relative time. For example, if we are willing to wait 3 minutes, instead of translating 3 minutes into a timespec structure, we need to translate now + 3 minutes into a timespec structure.

We can use gettimeofday (Section 6.10) to get the current time expressed as a timeval structure and translate this into a timespec structure. To obtain the absolute time for the timeout value, we can use the following function:

#include <sys/time.h>

void
maketimeout(struct timespec *tsp, long minutes)
{
    struct timeval now;

    /* get the current time */
    gettimeofday(&now, NULL);
    tsp->tv_sec = now.tv_sec;
    tsp->tv_nsec = now.tv_usec * 1000; /* usec to nsec */
    /* add the offset to get timeout value */
    tsp->tv_sec += minutes * 60;
}

If the timeout expires without the condition occurring, pthread_cond_timedwait will reacquire the mutex and return the error ETIMEDOUT. When it returns from a successful call to pthread_cond_wait or pthread_cond_timedwait, a thread needs to reevaluate the condition, since another thread might have run and already changed the condition.

There are two functions to notify threads that a condition has been satisfied. The pthread_cond_signal function will wake up one thread waiting on a condition, whereas the pthread_cond_broadcast function will wake up all threads waiting on a condition.
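Their POSIX prototypes are:

#include <pthread.h>

int pthread_cond_signal(pthread_cond_t *cond);
int pthread_cond_broadcast(pthread_cond_t *cond);

        Both return: 0 if OK, error number on failure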
When we call pthread_cond_signal or pthread_cond_broadcast, we are said to be signaling the thread or condition. We have to be careful to signal the threads only after changing the state of the condition.

Example

Figure 11.14 shows an example of how to use condition variables and mutexes together to synchronize threads. The condition is the state of the work queue. We protect the condition with a mutex and evaluate the condition in a while loop. When we put a message on the work queue, we need to hold the mutex, but we don't need to hold the mutex when we signal the waiting threads. As long as it is okay for a thread to pull the message off the queue before we call pthread_cond_signal, we can do this after releasing the mutex. Since we check the condition in a while loop, this doesn't present a problem: a thread will wake up, find that the queue is still empty, and go back to waiting again. If the code couldn't tolerate this race, we would need to hold the mutex when we signal the threads.

Figure 11.14. Using condition variables

#include <pthread.h>

struct msg {
    struct msg *m_next;
    /* ... more stuff here ... */
};

struct msg *workq;

pthread_cond_t qready = PTHREAD_COND_INITIALIZER;

pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

void
process_msg(void)
{
    struct msg *mp;

    for (;;) {
        pthread_mutex_lock(&qlock);
        while (workq == NULL)
            pthread_cond_wait(&qready, &qlock);
        mp = workq;
        workq = mp->m_next;
        pthread_mutex_unlock(&qlock);
        /* now process the message mp */
    }
}

void
enqueue_msg(struct msg *mp)
{
    pthread_mutex_lock(&qlock);
    mp->m_next = workq;
    workq = mp;
    pthread_mutex_unlock(&qlock);
    pthread_cond_signal(&qready);
}
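To see how these pieces fit together, here is a hypothetical driver for Figure 11.14. The number of worker threads, the message count, and the use of sleep to let the queue drain are all assumptions made for illustration, not part of the original example:

#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

#define NWORKERS 4 /* hypothetical number of consumer threads */

void *
worker(void *arg)
{
    process_msg(); /* loops forever pulling messages off workq */
    return(NULL);
}

int
main(void)
{
    pthread_t   tid[NWORKERS];
    struct msg *mp;
    int         i;

    for (i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);

    for (i = 0; i < 8; i++) { /* produce a few messages */
        if ((mp = malloc(sizeof(struct msg))) != NULL)
            enqueue_msg(mp);
    }

    sleep(1); /* crude: give the workers time to drain the queue */
    exit(0);
}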