Subj : Re: virtual addresses
To   : comp.programming
From : Brian
Date : Wed Sep 28 2005 05:08 am

Randy Howard wrote:
> Brian wrote
> (in article ):
>> Randy Howard wrote:
>>> Bill Cunningham wrote
>>> (in article ):
>>
>>>> So the applications and the kernel space do not know what the
>>>> real physical addresses are then. At least kernel space other
>>>> than the memory manager?
>>
>>> That is typically true for applications (with rare exceptions),
>>> but drivers and kernel code often know the physical address or
>>> both, depending on what is being done down low.
>>
>>> If you think about it, there really isn't a reason why an
>>> application needs to know the physical address.
>>
>>> In really high-end systems that support failover and RAID memory
>>> systems, and even 'hot add' of memory to a running system, it is
>>> extremely advantageous to not have apps tied to physical
>>> addresses.
>>
>> There's one odd thing about virtual memory that is unexplainable.

> Explaining and comprehending are not the same thing.

Nor is paper waving the same as QED.

>> Why haven't we outgrown page files / swap files? It was an early
>> trick and kludge, was it not?

> Because applications constantly want more memory for themselves
> than is available. Plus, as I indicated above, there are some
> very good reasons for the existence of virtual addressing, /even
> if/ you have sufficient physical memory.

Separating address space from paging is the key here. Are we talking
apples and apples? I agree that a flat address space per process is a
key feature that is very much needed. Page tables are needed. I don't
believe cycling pages to disk is strictly necessary.

>> My first XP install was on a PIII with 256MB of memory. I suppose
>> the page file was an additional 256MB. The computer had a
>> practical upper memory limit of 512MB.

> All you had to do was use a larger swap file. In fact, if you
> left XP alone to manage its own virtual memory, it would have.

Who said I modified it? Generally the swap/page file is the same size
as the physical memory; that's a rule of thumb on several OSes.

>> I have the same version of XP running on a P4 with 2GB of memory
>> today. 2GB is approximately 4x my original system's TOTAL
>> available memory including its page file.

> Irrelevant. If you try to load up applications totalling more
> than 2GB of memory consumption (including OS), then you still
> need a swap file, or you run the risk of crash-o-rama (ahem).

This ignores the fact that installed memory varies between 128MB and
4GB for the exact same 32-bit Windows system. Certainly the practical
virtual memory footprint of the first system would fit conveniently
into the second /without/ paging. I'd suggest most users would be
hard pressed to tell the difference between a 512MB WinXP system and
a 2GB WinXP system. So again, plenty of memory.

Java developers frequently avoid filling system memory for fear of
garbage collection pauses. In fact, I sat in a seminar recently where
the system admins learned to their chagrin that the extra 14GB in
their BEA servers was to go unused. The 2GB tuned for BEA WebLogic
was about enough to prevent pauses. So the extra memory is now a
space heater. Memory is probably growing at a faster rate than flops.
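As an aside, it's easy enough to check for yourself how much of a
machine's commit limit is actually in play. A rough sketch for Win32
(nothing exotic, just the documented GlobalMemoryStatusEx() call;
printing in MB is only for readability):

/* Rough sketch for Win32 (2000/XP or later): show physical RAM next
 * to the commit limit (roughly RAM + page file) so you can see how
 * much of the page file a given box really needs. */
#define _WIN32_WINNT 0x0501
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);

    if (!GlobalMemoryStatusEx(&ms)) {
        fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n",
                (unsigned long)GetLastError());
        return 1;
    }

    printf("physical RAM     : %lu MB total, %lu MB free\n",
           (unsigned long)(ms.ullTotalPhys / (1024 * 1024)),
           (unsigned long)(ms.ullAvailPhys / (1024 * 1024)));
    printf("commit limit     : %lu MB (RAM + page file)\n",
           (unsigned long)(ms.ullTotalPageFile / (1024 * 1024)));
    printf("commit available : %lu MB\n",
           (unsigned long)(ms.ullAvailPageFile / (1024 * 1024)));
    printf("memory load      : %lu%%\n",
           (unsigned long)ms.dwMemoryLoad);
    return 0;
}

On the 2GB box described above, I'd expect the difference between
"commit limit" and "commit available" to sit well below physical RAM
for most of the day - which is the whole point.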
>> In other words, I have plenty o' memory without the disk. So why
>> is the page file still there?

> Because you don't have enough memory. You *think* you have
> enough, but you can't predict how much memory you will need. On
> some systems, 16GB of RAM is insufficient.

I offer that I use XP with the same OS version, "nearly" the same
software, and the same OO paradigm that's been around for the past
10 years or more. Microsoft's stuff has been Win32 and MFC (a
wrapper) for ages, hasn't it? Same libraries. Same OS. How does this
add up to bigger apps exactly? Office still fits on the same ISO CD
it did in 1995.

Meanwhile, memory has grown by something pretty close to Moore's law.
In 1995, 16 MB was aggressive. Applying Moore's law over the ten
years since (five doublings), 16 * 2^5 = 512 MB today. On paper. But
I'm seeing 2GB used quite commonly, so memory has actually grown
faster than Moore's law. Exponential memory growth, constant or
linear application growth. You can see why I don't buy the argument.

> Either way, virtual memory provides some advantages even in
> cases where you have more physical memory than your peak usage
> (which you can't ever guarantee).

So you're saying that it's always better to map virtual memory to
magnetic media than to system memory? I can't see how that would ever
be true.

The entire question boils down to this: what type of memory is
faster, and is there enough of it? I think that, perhaps on a tunable
level, system memory is plentiful enough to do away with the hard
disk.

The problem is that whenever the swap file is eliminated, performance
does indeed suffer. The only reasonable explanation is that the
virtual memory manager degrades at the boundary case - similar to zip
compression producing an archive bigger than the raw input. That's
the part that hasn't been explained to me. I'll dive into Linux or
OpenSolaris one day and figure it out -- assuming swap files aren't
eliminated in the meantime.
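For what it's worth, you don't have to wait for swap files to be
eliminated to run the experiment per process. A rough sketch using
POSIX mlockall() (so it should apply to Linux and OpenSolaris alike);
the 64 MB allocation is just an arbitrary test size, and you need
root or a big enough RLIMIT_MEMLOCK (ulimit -l) for it to succeed:

/* Rough sketch: take one process out of the paging game entirely.
 * After mlockall(), its pages stay resident in RAM and are never
 * written to swap, while the rest of the system keeps its virtual
 * memory behaviour unchanged. */
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Lock everything mapped now and everything mapped later. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return EXIT_FAILURE;
    }

    /* An arbitrary 64 MB test allocation; with MCL_FUTURE in effect
     * it is locked into RAM as soon as it is mapped. */
    size_t len = 64u * 1024 * 1024;
    char *buf = malloc(len);
    if (buf == NULL) {
        perror("malloc");
        return EXIT_FAILURE;
    }
    memset(buf, 1, len);    /* touch it anyway, belt and braces */

    puts("64 MB locked in RAM; check the VmLck line in"
         " /proc/self/status (Linux)");
    getchar();              /* pause here and go look */

    munlockall();
    free(buf);
    return EXIT_SUCCESS;
}

That's roughly the "tunable level" I have in mind: one process opts
out of paging and everything else stays as it is. Why removing the
page file system-wide still hurts is the part I'll have to go read
the VM code to understand.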