Newsgroups: comp.arch
Path: utzoo!utgpu!watserv1!watdragon!watsol.waterloo.edu!tbray
From: tbray@watsol.waterloo.edu (Tim Bray)
Subject: Re: Code to Data ratio
Message-ID: <1991May28.001111.22216@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (News Owner)
Organization: University of Waterloo
References: <13179@pt.cs.cmu.edu> <11397@skye.cs.ed.ac.uk> <49500@ut-emx.uucp>
Date: Tue, 28 May 1991 00:11:11 GMT
Lines: 23

video@ccwf.cc.utexas.edu (Henry J. Cobb) writes:
> 	Data streams through our systems, and is either temporal (and
> therefore compact) or part of a large database that can be retrieved a piece
> at a time from disk.
> 	When we buy DRAMs, they are for code, not data.  And shared libraries
> will become ever more useful.

If Mr. Cobb is using "we" in the sense of "us down here at U Texas", he may
be perfectly correct.  If he means "the computing community", he's probably
wrong in general.  While it is true that much memory is burned supporting
multiprogramming code bloat and the GUIs of this world, there are *lots* of
applications that make profitable use of large random-access data
structures; in this class I would include most of AI, all of symbolic
algebra, and many database I/O optimization strategies.  In particular, for
large databases it is often wise to go to great lengths, and consume lots of
RAM, in order to avoid retrieval "one at a time from disk".
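To make the trade-off concrete, here is a minimal sketch (in Python, with a made-up `PageCache` class and a stub standing in for real disk I/O) of the standard technique: keep recently used database pages in an in-memory LRU cache, so repeated accesses cost a memory lookup rather than a disk retrieval.

```python
# Hypothetical sketch: trading RAM for disk reads via an LRU page cache.
# 'read_page_from_disk' is a stub for real retrieval; 'disk_reads' counts
# how many times we would actually have gone to disk.
from collections import OrderedDict

class PageCache:
    def __init__(self, capacity):
        self.capacity = capacity          # pages held in RAM
        self.pages = OrderedDict()        # page_id -> data, in LRU order
        self.disk_reads = 0

    def read_page_from_disk(self, page_id):
        self.disk_reads += 1
        return "contents of page %d" % page_id

    def get(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # mark as recently used
            return self.pages[page_id]
        data = self.read_page_from_disk(page_id)
        self.pages[page_id] = data
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)    # evict least recently used
        return data

cache = PageCache(capacity=100)
for _ in range(3):
    data = cache.get(42)   # first call hits "disk"; repeats come from RAM
print(cache.disk_reads)    # prints 1
```

The more RAM you devote to `capacity`, the larger the working set that never touches disk again, which is exactly the "go to great lengths" trade the paragraph above describes.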

I also think a very general case could be made that as user interfaces
evolve, they maintain more and more state information to minimize the
load on the user in a variety of useful ways.  So bigger may, in general,
actually be better...

Cheers, Tim Bray, Open Text Systems
