Newsgroups: comp.arch
Path: utzoo!utgpu!news-server.csri.toronto.edu!mailrus!wuarchive!rice!news
From: cliffc@libya.rice.edu (Cliff Click)
Subject: Extremely Large Files
Message-ID: <1990Aug9.145734.14864@rice.edu>
Sender: news@rice.edu (News)
Organization: Rice University, Houston
References: <13667@cbmvax.commodore.com> <5286@mace.cc.purdue.edu> <14923@cgl.ucsf.EDU>
Date: Thu, 9 Aug 90 14:57:34 GMT

I think the large-file folks don't need more address bits; they need a
better addressing scheme.  Using infinite-precision integers in the
applications lets them write code that handles a file of any size.
Then it's the OS folks' job to understand seek(1 Trillion), and translate
that to a disk sector, or read/write/load that between memory and disk.

If you want to say "malloc(1 Trillion)" that's a slightly different problem;
here you're requesting more virtual memory than you have.  This is a language
implementors' problem:  you *can* deal with a name space larger than your
physical virtual address space (more than 32 bits), it's just slower
and requires more smarts.  Object-oriented folks have a leg up here, and IBM
PC folks fought this fight some time ago.

Cliff Click
-- 
Cliff Click                
cliffc@owlnet.rice.edu       
