Newsgroups: comp.arch
Path: utzoo!utgpu!dennis
From: dennis@gpu.utcs.utoronto.ca (Dennis Ferguson)
Subject: Re: Sun bogosities, including MMU thrashing
Message-ID: <1991Jan21.225211.17757@gpu.utcs.utoronto.ca>
Organization: very little
References: <5257@auspex.auspex.com> <3956@skye.ed.ac.uk> <PCG.91Jan18142616@teachk.cs.aber.ac.uk> <5390@auspex.auspex.com> <PCG.91Jan21160353@odin.cs.aber.ac.uk>
Date: Mon, 21 Jan 91 22:52:11 GMT

In article <PCG.91Jan21160353@odin.cs.aber.ac.uk> pcg@cs.aber.ac.uk (Piercarlo Grandi) writes:
>On 20 Jan 91 20:13:55 GMT, guy@auspex.auspex.com (Guy Harris) said:
>
>pcg> [ ... raising the default block size is not a clever move ... ]
>
>pcg> It also has some big disadvantages, hinted at by Thompson&Ritchie in
>pcg> the V7 papers (they advised against doubling the block size from 1 to
>pcg> 2 sectors with terse and cogent reasoning).
>
>guy> Which paper was that?
>
>One of "Unix Implementation" or "Unix IO system", or "A retrospective".

While I'm unwilling to dig through the references to determine whether
Thompson and Ritchie actually said this, if they did I do think they may
have changed their minds about it.  The file system used (on Vaxes) by
Version 8 was essentially the V7 file system with the block size increased
from 512 bytes to 4096.  If memory serves, the stated reason this was done
was simply that it made the file system run 8 times faster under typical
loads.
And I distinctly remember arguments being made at the time to the effect
that the speed of the Berkeley fast file system (still a fairly recent
innovation then) was almost exclusively due to the larger block size, and
that the block clustering algorithm, which makes the supporting code complex
and relatively CPU-intensive when writing, really was unnecessary.
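The rough arithmetic behind that "8 times faster" figure can be sketched as
follows (my illustration, not from any of the papers cited above): if the
per-transfer overhead of seek and rotational latency dominates, the cost of
reading a file scales with the number of disk transfers, which drops by the
block-size ratio.  The file size below is a made-up example.

```python
def transfers(file_size, block_size):
    """Disk transfers needed to read file_size bytes, one block per transfer."""
    return -(-file_size // block_size)  # ceiling division

file_size = 64 * 1024          # hypothetical 64 KB file
v7 = transfers(file_size, 512)   # V7's 512-byte blocks
v8 = transfers(file_size, 4096)  # V8's 4096-byte blocks
print(v7, v8)  # 128 vs 16 transfers: an 8x reduction for block-aligned sizes
```

Of course the real speedup depends on how much of each transfer's cost is
fixed overhead, which is why "under typical loads" is doing some work in the
claim above.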

While I can't judge the merit of the rest of your arguments, I do think
that the documentation accompanying later versions of research Unix not
only fails to support your opinion, but rather directly contradicts it.

Dennis Ferguson
