Newsgroups: can.usrgroup
Path: utzoo!lsuc!eci386!jmm
From: jmm@eci386.uucp (John Macdonald)
Subject: Re: Is it the interleave?
Message-ID: <1989Oct12.202334.23282@eci386.uucp>
Reply-To: jmm@eci386.UUCP (John Macdonald)
Organization: R. H. Lathwell Associates: Elegant Communications, Inc.
References: <891006074552.5152@tmsoft.uucp> <695@ecicrl.UUCP> <1989Oct8.192745.20807@tmsoft.uucp> <1989Oct10.221219.3571@eci386.uucp>
Distribution: ont
Date: Thu, 12 Oct 89 20:23:34 GMT
Lines: 96

In article <1989Oct10.221219.3571@eci386.uucp> clewis@eci386.UUCP (Chris Lewis) writes:
|
|We (well, in a previous incarnation, my colleagues (including to a certain
|extent John Macdonald - I did the Spectrix DPT driver...)) implemented 
|driver-based track buffering on the Spectrix 68020 Xenix systems, and 
|discovered a pretty substantial performance improvement
|*on average*.  Provided that the controller (or driver) manages to
|keep a *couple* (or more) full tracks of data around as a next level
|of buffer cache, UNIX's tendency to read disk blocks sequentially
|(even in archaic file system layouts like Xenix III) will get you a big win.
|
|John Macdonald may remember some of the performance statistics from
|the XL.

I'm afraid I don't remember any specific figures.  It was not (necessarily)
using physical tracks.  We used "logical" tracks - a block of "k" sequential
sectors aligned on a "k"-sector boundary.  If "k" happened to be the same
number as there were physical sectors per track, then it was buffering
physical tracks.  Even if a system was being used for just one purpose,
it was still very valuable to have more than a few track buffers, and
when many users were on, many buffers were needed.  It generally turned
out more valuable to allocate "extra" memory to track buffers than to
the Unix buffer cache.  It also turned out to be even more valuable than
usual to run fsck frequently to reorder the free list - keeping the free
list sorted means that files will have larger portions of their data in
contiguous sectors (or close to it).  As I recall, we used about 1 Meg
for track buffers and about 1/2 Meg for buffer cache.

|As a canonical *best* case, just imagine a dd of a blocked disk, *one*
|rotation per track read.  Which is on the order of 6 times faster than
|systems that don't have or can't fully utilize 1:1 interleave.

Note that even with Unix's internal read-ahead you still usually can't
use 1:1 interleave (often far less).  Consider the chain of events: the
controller finishes reading a sector, transfers it to the system's
memory, interrupts the processor, and reports that the transfer is
complete; the device driver tells Unix that the transfer is complete;
Unix notices that this is a sequential file, decides a read-ahead of
the next sector is in order, and issues the read request to the device
driver; the device driver prepares the control blocks and tells the
controller to use them; the controller fetches the control blocks and
interprets them to find that it is being asked for the next sector
(whew).  Well after all that has happened, the disk has certainly
revolved past the few bytes between the end of one sector and the start
of the next (and probably past a number of complete sectors too).  With
track buffering, one request is sent to the controller for the whole
group of sectors, so it is only delays internal to the controller that
could keep it from handling 1:1 interleave (such delays are not
uncommon, mind you).
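That argument can be put in back-of-envelope form (the drive figures
below are illustrative - a 3600 rpm drive with 35 sectors per track -
and the function is my sketch, not anything from the actual driver):

```python
import math

# With one request per sector, the kernel/driver/controller turnaround
# must fit into the gap before the next sector comes under the head,
# or that sector is missed and we wait most of a revolution (or run a
# looser interleave).  Illustrative figures: 3600 rpm, 35 sectors/track.

ROTATION_MS = 50.0 / 3.0                      # 16 2/3 ms per revolution
SECTORS_PER_TRACK = 35
SECTOR_MS = ROTATION_MS / SECTORS_PER_TRACK   # ~0.48 ms per sector

def interleave_needed(turnaround_ms):
    """Smallest interleave factor that still catches the next sector."""
    # sectors that pass under the head while the next request is set up
    missed = math.ceil(turnaround_ms / SECTOR_MS)
    return 1 + missed

# Zero turnaround allows 1:1; a mere 1 ms of software and controller
# overhead already forces 4:1 on this drive.
```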

Where track buffering loses is when you *don't* ever use any of the
other sectors in the logical track.  Then you have had a somewhat
slower time for the original track and no subsequent gain on the
(non-existent) later ones.  The time penalty for the first sector
is the rotational time to read the extra sectors in the logical track.
Some sample time calculations, using a Maxtor 1140 with logical track
size of 9K (half a physical track):

random one-sector (512 bytes) read (i.e. first read in a new place):
	seek:	20 ms.
	rotate: 8 1/3 ms. (avg half rotation)
	read:   1/2 ms. (one sector = 1/35 of a track)
	total:  29 ms.

subsequent one-sector read (i.e. previous read was in same physical track):
	seek:   0
	rotate: 8 1/3 ms. (avg half rotation)
	read:   1/2 ms.
	total:  9 ms.

random track buffer (9K bytes) read (i.e. first read in a new place):
	seek:   20 ms.
	rotate: 8 1/3 ms.
	read:   9 ms. (18 sectors = 18/35 of a track)
	total:  37 ms.

Thus, the track buffering read adds roughly 30% to the first read to an
area (37 ms vs. 29 ms), but then the next 17 reads in the same area are
free, instead of 1/3 price.  If two sectors are used, then track
buffering breaks even, and if more are used then it wins.
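For concreteness, the arithmetic above can be written out as a small
cost model (a sketch using the article's figures; the names are mine):

```python
# The sample calculations above as a cost model (Maxtor 1140-ish
# figures: 20 ms average seek, 8 1/3 ms average rotational latency,
# one sector ~ 1/2 ms, 18-sector logical track).

SEEK_MS = 20.0
HALF_ROT_MS = 25.0 / 3.0        # 8 1/3 ms average rotational latency
SECTOR_MS = 0.5                 # one sector = ~1/35 of a rotation
TRACK_SECTORS = 18              # 9K logical track

def first_read_ms(track_buffered):
    """Cost of a random read into a fresh area of the disk."""
    read = TRACK_SECTORS * SECTOR_MS if track_buffered else SECTOR_MS
    return SEEK_MS + HALF_ROT_MS + read

def total_ms(n_sectors, track_buffered):
    """Cost of n sequential sector reads within one logical track."""
    if track_buffered:
        return first_read_ms(True)      # the rest come from the buffer
    # without buffering, each later read pays rotational latency again
    # (and this is the best case: no intervening disk activity)
    return first_read_ms(False) + (n_sectors - 1) * (HALF_ROT_MS + SECTOR_MS)

# One sector: ~37 ms buffered vs ~29 ms unbuffered; two sectors:
# ~37 ms vs ~38 ms -- the break-even point described above.
```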

Note that the cheap subsequent read (non-track buffering) only occurs
if no other disk activity occurs in between - whereas the track buffering
subsequent reads are essentially free unless so many different areas are
accessed that the buffer cannot be retained.

|Canonical *worst* case is the 53 Ms. average that you calculated.
|
|On the other hand, the DPT left the track buffering Spectrix driver
|in the dust....  Woof!

Mind you, we never got around to implementing track buffering in the
DPT device driver - it might have provided some additional advantage.
-- 
"Software and cathedrals are much the same -          | John Macdonald
first we build them, then we pray" (Sam Redwine)      |   jmm@eci386
