Subj : Re: Windows issues
To   : Deuce
From : Angus Mcleod
Date : Sat Jun 11 2005 12:47 am

Re: Re: Windows issues
By: Deuce to Angus Mcleod on Fri Jun 10 2005 22:11:00

 > > Well, that is a matter for debate, but it is a fairly reasonable claim.
 > > If all of your programs, all together (OS too), including code, data,
 > > buffers, drivers, etc., come to (say) 200 meg of RAM....

 > A simpler explanation goes something like this...
 > You have 128MB of RAM... you are running two processes that each consume
 > 64MB of RAM. One process is very active and the other is idle. The
 > active process is doing many random reads/writes to the disk.
 >
 > Without paging, this means that none of the IO gets cached, because
 > there is no free RAM.

Sure, but I was assuming that the space used by each of your processes
included some buffer areas, so you wouldn't actually be stuck in this
position. The amount of space needed for efficient buffering will only
ever be a guesstimate.

 > If the idle task gets paged out, you then have 64MB of cache to buffer
 > the disk IO in, which WILL increase performance. When you switch back
 > to the idle task, there will be a noticeable flood of page faults,
 > resulting in something that LOOKS like poor performance but in actual
 > fact gave BETTER performance overall.

I think this is part of how these so-called optimizers fiddle with the
metric to appear to be doing something good. If your criterion is "free
RAM", then they may actually generate increases in free RAM at the expense
of some other area of memory management that is already working well. If
they can manipulate the OS into paging out more of an active program, you
will apparently get more free RAM, but at the cost of increasing the
number/frequency of page faults on active processes. Your RAM-O-Meter
shows a big increase, but your active processes actually slow down until
that freed RAM is re-committed by the OS.

--- þ Synchronet þ Cry "Softly" then hit hard at The ANJO BBS
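
To put rough numbers on Deuce's tradeoff (every figure below is an
assumption for illustration, not a measurement): say the active task
issues 10,000 random 4KB reads, a random disk read costs about 8 ms,
the reclaimed 64MB of cache yields a 90% hit rate, and paging the idle
task back in streams from the pagefile at about 20 MB/s. A small C
sketch of the arithmetic:

#include <stdio.h>

int main(void)
{
    /* All figures are assumptions for illustration, not measurements. */
    double reads    = 10000.0;      /* random 4KB reads by the active task */
    double seek_s   = 0.008;        /* assumed cost of one random read     */
    double hit_rate = 0.90;         /* assumed hit rate with 64MB of cache */
    double pagein_s = 64.0 / 20.0;  /* 64MB back in at ~20 MB/s = 3.2 s    */

    double no_cache   = reads * seek_s;
    double with_cache = reads * (1.0 - hit_rate) * seek_s + pagein_s;

    printf("no paging, no cache : %5.1f s\n", no_cache);    /* 80.0 s */
    printf("idle task paged out : %5.1f s\n", with_cache);  /* 11.2 s */
    return 0;
}

Under those assumptions the page-fault flood when the idle task wakes
costs about 3 seconds, while the cache saves over a minute of seeking,
which is exactly why performance that LOOKS poor is better overall.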
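
As for how an optimizer manipulates the OS into paging a program out: on
Windows the documented way to force a working-set trim is
SetProcessWorkingSetSize() with (SIZE_T)-1 for both limits. A minimal
Win32 C sketch of the trick (the 64MB buffer and the page-stepping loops
are only illustration):

#include <windows.h>

int main(void)
{
    SIZE_T size = 64 * 1024 * 1024;   /* 64MB, to match the example */
    SIZE_T i;
    volatile char *buf;

    /* Commit the buffer and touch every page so it all lands in this
       process's working set. */
    buf = (volatile char *)VirtualAlloc(NULL, size,
                                        MEM_COMMIT | MEM_RESERVE,
                                        PAGE_READWRITE);
    if (buf == NULL)
        return 1;
    for (i = 0; i < size; i += 4096)
        buf[i] = 1;

    /* The "optimizer" trick: (SIZE_T)-1 for both limits asks the OS to
       remove as many pages as possible from the working set.  A RAM
       meter now shows a big jump in "free" memory, but the pages have
       only moved to the standby/modified lists or out to the pagefile. */
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);

    /* Touch the buffer again: every access is now a page fault, soft
       if the page is still on the standby list, hard if it has to come
       back from the pagefile.  This is the slowdown the RAM-O-Meter
       never shows you. */
    for (i = 0; i < size; i += 4096)
        buf[i] = 2;

    VirtualFree((LPVOID)buf, 0, MEM_RELEASE);
    return 0;
}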