Subj : Re: Challenge: Multithreading & Synchronization
To   : comp.programming.threads
From : Uenal Mutlu
Date : Thu May 19 2005 01:05 pm

"David Butenhof" wrote
> Uenal Mutlu wrote:
> > "David Schwartz" wrote
> >
> >> It is not the number of threads, but the forced association between a
> >> thread and something outside the program that has a long duration,
> >> such as a connection or client.
> >
> > How can you make such an assumption? It is nowhere stated.
> > It depends on the session protocol, and it's up to the threadproc
> > and/or the calling (ie. main) thread to handle such things,
> > for example: closing a session after x seconds of inactivity etc.
> > (unless you mean something different).
>
> He can make this assumption because you said so. In your original
> "challenge" post:
>
>   2) It uses a thread per connected client.
>
> THIS is what he's talking about. Forced association between client
> context and the thread execution entity is bad design. It makes your
> client connections interdependent, competing for shared resources that
> aren't relevant. Threads CANNOT effectively compete with each other,
> because they share too much state, like memory manager and I/O subsystem
> locks. You cannot easily maintain any reasonable server concept of
> "fair" client response in this model; and if you try, it will be by
> excessive arbitrary synchronization that will cost you an enormous
> number of otherwise pointless context switches.

It's just memory, ie. very fast. And such clients mostly just sleep
(imagine the job of a web server).

> A client context is an entity. Your server operations take place in
> threads. If you have 128 processors, you may want 128 threads; perhaps
> even more depending on the I/O architecture. But the key is that the
> threads COOPERATE (not compete) to meet client needs by sharing the
> workload according to server throughput and service goals.
> And that means separating the execution context from the client context
> in some less trivialized model, such as a "work crew" serving shared
> (and probably prioritized) client queues.

How would you serve 1000 clients? Use 1000 processors?

> You have made the fundamental error of defining "thread == client". This
> is a bad design, and you cannot fix the bad design by ADDING to it; only
> by eliminating the error and starting over.

Before that I had tried the following (in a sockets communication
application):

1) Put all requests coming in from the connected sockets into a big queue
   and let multiple threads (number configurable) process this queue.
2) Then I tried using a thread per connected client.

The overall performance of the latter was better than that of the first
approach. This is so because the queue operations had a higher overhead
than the usual context-switching overhead. Additionally, the programming
was much easier in the second case, because you concentrate on _this_
client only, with no need to know what else is there, ie. much like in a
singlethreaded application.

So, because in such applications one can say that an average PC server
machine should not serve more than about 2000 clients (FYI: a fast
Windows PC can handle more than 5000 threads if it has more than 2 GB
RAM; it's a matter of thread stack size vs. available memory, and of CPU
speed), I chose the second approach. I decided that "a thread per client"
is sufficient, because in our case the average expected number of
simultaneously served clients was much less than that (250 to 500).

Yet, should there one day be more than 2000 clients to serve at the same
time, then one could simply upgrade the machine (esp. CPU and memory), or
add another machine and put these machines into a local multicast (or
broadcast) group for exchanging status information, doing load balancing,
sharing internal databases, etc. Then I would also have used a protocol
like MPI.
To summarize: for normal PC server use, the "a thread per client"
solution is very much sufficient, and it very much simplifies the
development. So, my decision for "a thread per client" is intended for
average PC servers. For mainframes I cannot make an estimate, because I
haven't worked with any of those monster machines of the present,
...should they still exist... :-)