Subj : Re: Threading and Timeouts
To   : comp.programming.threads
From : Giancarlo Niccolai
Date : Thu Jul 21 2005 03:42 am

David Schwartz wrote:

> "Giancarlo Niccolai" wrote in message
> news:dbmn30$209$1@newsread.albacom.net...
>
>> Ah, NOW I see your point. You say that there is no guarantee that the
>> socket (let's focus on that) status may not change after select() has
>> decided it was ready. It may theoretically change also before it
>> returns.
>
> Of course. This point is pacific.
>
>> Well, supposing the socket is used by more threads/processes at a
>> time, without coordination, yes.
>
> Every socket is used by at least two things, the local end and the
> remote end. A TCP connection can change at any time due to packets
> received over the network. A UDP socket can change because an
> interface can go up or down.

I want to underline that this has nothing to do with the standard. This
class of problem may seem the same as the read/write consistency
problems of a multithreaded environment, but only under some conditions
that I am willing to exclude. That is, a read consistency problem is
physiological to the MT environment, while a change in the readiness
status of a socket between two operations of a given process (i.e. a
select() and a subsequent I/O call) is a pathological situation: not
excluded by the standard, and so theoretically possible.

If you see a ready-for-read condition on a socket, it can only be
ready for read again when you actually read it. Anything may be pushed
from the other side, the connection may be closed, or the interface may
be set on fire in the meanwhile; yet, in the real world, unless you
take deliberate action from somewhere else to destroy its readability,
the data sitting there for you to read will still be there for you to
read a while later. [I agree this is not in the standard, and I agree
it is not mathematically certain, but read on.]

Same for writing. A socket that is ready for writing, and that WON'T be
used by any other process on your machine, will be ready for writing
also at a later moment, even if the interface explodes (unless the
explosion also destroys the CPU, but this brings in other problems that
are not defined in the standard, e.g. how much of the latest
instruction the CPU(s) is/are processing is really completed?). If the
interface explodes, provided the kernel is still running, it will still
accept, without blocking, what a write() provides.

I won't claim anymore that this is in the standards; I simply misread
them. But I am still confident in claiming that, without erroneous
programming of the system applications, the positive check resulting
from a select() is not going to change, and is still available for the
application to use at *any* later moment.

Is it correct to *base* an application on this assertion? *NO*, just as
it is generally not correct to believe that the system has no errors
beyond what the standard guarantees. (Uhm, I heard Dr. Butenhof once
say that, on the practical level, it is not even necessarily correct to
blindly follow the standard, because the standards are ill-implemented
on many platforms; yet this reasoning leads up to the point that it is
mathematically demonstrable that no correct program can possibly be
built in the real world, that is, outside a theoretical standardized
environment, so I won't pursue its full consequences.)

BUT, while it is not correct to *base* an application on this
assumption, it is correct to *exploit* it, provided that a different
outcome (i.e. a read() on a ready-to-read socket actually blocking)
won't cause the application to malfunction. As no error condition must
cause a correct program to fail, THIS error condition (blocking on a
socket after select() has said it's ready) must be dealt with correctly
too, or the consequences of not dealing with it must be accurately
known.
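To make "dealing with it" concrete, here is a minimal sketch of what I
mean (mine, just for illustration; the careful_read() name and the
retry policy are invented for the example, and it assumes a connected
socket whose descriptor is below FD_SETSIZE): put the socket in
non-blocking mode before trusting select(), so that if the readiness
evaporates between the select() and the read(), the read() fails with
EAGAIN/EWOULDBLOCK instead of hanging the thread.

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/select.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Read from fd using select() as a hint, not as a guarantee.
     * Returns what read() returns; retries if the hint turns out wrong. */
    ssize_t careful_read(int fd, void *buf, size_t len)
    {
        fd_set rfds;
        ssize_t n;
        int flags;

        /* Non-blocking mode is the safety net: if readiness vanishes
         * between select() and read(), we get EAGAIN, not a stuck thread. */
        flags = fcntl(fd, F_GETFL, 0);
        if (flags < 0 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0)
            return -1;

        for (;;) {
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0) {
                if (errno == EINTR)
                    continue;   /* interrupted by a signal: just retry */
                return -1;
            }

            n = read(fd, buf, len);
            if (n >= 0)
                return n;       /* data, or 0 on orderly close */
            if (errno != EAGAIN && errno != EWOULDBLOCK)
                return -1;      /* a real error, report it */
            /* The pathological case: select() said ready, read() did
             * not agree. Not a failure of the program; select() again. */
        }
    }

Retrying is only one possible policy, of course; the point is merely
that the EAGAIN branch exists and is handled, instead of letting the
thread block.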
Am I correct up to this point?

Bests,
Giancarlo Niccolai