Subj : pthread_cond_timedwait problems
To   : comp.programming.threads
From : markh@compro.net
Date : Sat Aug 13 2005 01:27 pm

I sometimes use pthread_cond_timedwait in an application for the sole purpose
of a delay. It does not delay (return ETIMEDOUT) for the correct amount of
time. The sample program below shows my problem with this call. I have a
hardware timer from a special PCI card, mapped into my task space, that has
microsecond resolution; this timer confirms the problem with this function. I
have also included in the sample program an alternative way of timing the
ETIMEDOUT return of this call using gettimeofday. It is not as accurate as the
hardware timer, but it shows the problem nonetheless.

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <pthread.h>
#include <time.h>
#include <sys/time.h>
#include <sys/types.h>
/* rtom_usec_map()/rtom_usec_unmap() are declared in the PCI timer card's
   own header */

int main()
{
    volatile unsigned long *rtom_microseconds;
    unsigned int usecs;
    unsigned int rtom_han;
    unsigned int elapsed_usecs;
    struct timeval tod_end_time;
    struct timeval tod_start_time;
    struct timezone tod_zone;
    struct timespec delay = { 0, 10000 };   // 10 microsecond delay
    struct timespec delay_tim;
    pthread_mutex_t lck = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t cv = PTHREAD_COND_INITIALIZER;

    rtom_han = rtom_usec_map(&rtom_microseconds, 0);   // mmap usec timer

    gettimeofday(&tod_start_time, &tod_zone);          // get start time
    delay_tim.tv_sec  = tod_start_time.tv_sec;
    delay_tim.tv_nsec = (tod_start_time.tv_usec * 1000);
    delay_tim.tv_sec  += delay.tv_sec;
    delay_tim.tv_nsec += delay.tv_nsec;

    printf("Attempting delay of %d nsec (%d usecs)\n",
           (uint)delay.tv_nsec, ((uint)delay.tv_nsec / 1000));

    usecs = *rtom_microseconds;                        // get real start time
    while (pthread_cond_timedwait(&cv, &lck, &delay_tim) != ETIMEDOUT);
    elapsed_usecs = (*rtom_microseconds - usecs);

    gettimeofday(&tod_end_time, &tod_zone);
    printf("gettimeofday reported delay = %d usecs\n",
           (int)(tod_end_time.tv_usec - tod_start_time.tv_usec));
    printf("Actual usec delay from timer = %d usecs\n", elapsed_usecs);

    exit(0);
    rtom_usec_unmap(rtom_han);
}

Here is the output of this program as is, for a delay of 10000 ns (10 us):

Attempting delay of 10000 nsec (10 usecs)
gettimeofday reported delay = 73 usecs
Actual usec delay from timer = 12 usecs

That looks ok. Now if I try 100000 ns (100 us) by changing

    struct timespec delay = { 0, 100000 };  // 100 microsecond delay

Attempting delay of 100000 nsec (100 usecs)
gettimeofday reported delay = 2120 usecs
Actual usec delay from timer = 2059 usecs

It's obvious that the gettimeofday call itself takes around 70 usecs, but both
timings are bogus. The rtom timer should show 100 and gettimeofday should
report around 170.

Ok, let's try 1000000 ns (1000 us):

Attempting delay of 1000000 nsec (1000 usecs)
gettimeofday reported delay = 2225 usecs
Actual usec delay from timer = 2164 usecs

What's going on here? I don't see anything wrong with the program. Is there
something wrong with this code, or is it a BUG?

Thanks
Mark
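
P.S. In case it helps anyone reproduce this without the PCI card, here is a
cut-down, self-contained variant of the same delay test that times the wait
with gettimeofday only. Two things differ from the program above, and I'm not
sure whether either of them matters: the mutex is locked around the
pthread_cond_timedwait call (as the man page describes), and the tv_nsec sum
is carried into tv_sec when it goes over one second.

/* Minimal, self-contained variant of the delay test (no PCI timer). */
#include <stdio.h>
#include <errno.h>
#include <pthread.h>
#include <time.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;
    struct timespec deadline;
    long delay_ns = 100000;                 /* 100 microsecond delay */
    pthread_mutex_t lck = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  cv  = PTHREAD_COND_INITIALIZER;

    gettimeofday(&start, NULL);

    /* Build the absolute deadline: now + delay, normalizing tv_nsec
       so it stays below one second. */
    deadline.tv_sec  = start.tv_sec;
    deadline.tv_nsec = start.tv_usec * 1000L + delay_ns;
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec  += deadline.tv_nsec / 1000000000L;
        deadline.tv_nsec %= 1000000000L;
    }

    /* Nobody ever signals cv, so loop until the wait times out. */
    pthread_mutex_lock(&lck);
    while (pthread_cond_timedwait(&cv, &lck, &deadline) != ETIMEDOUT)
        ;
    pthread_mutex_unlock(&lck);

    gettimeofday(&end, NULL);
    printf("requested %ld usecs, measured %ld usecs\n",
           delay_ns / 1000,
           (end.tv_sec - start.tv_sec) * 1000000L +
           (end.tv_usec - start.tv_usec));
    return 0;
}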