Newsgroups: news.software.b
Path: utzoo!henry
From: henry@utzoo.uucp (Henry Spencer)
Subject: Re: Lots of dups
Message-ID: <1989Oct27.161302.5026@utzoo.uucp>
Organization: U of Toronto Zoology
References: <1989Oct25.164024.14894@ctr.columbia.edu> <1989Oct25.205129.16397@brutus.cs.uiuc.edu> <1989Oct26.164042.4692@utzoo.uucp> <1989Oct27.012640.27706@brutus.cs.uiuc.edu>
Date: Fri, 27 Oct 89 16:13:02 GMT

In article <1989Oct27.012640.27706@brutus.cs.uiuc.edu> coolidge@cs.uiuc.edu writes:
>2) starting just one relaynews per pass over the incoming directory,
>rather than one per file as the default newsrun does. In any case,
>my system (write each incoming article as a separate file, then
>feed them all down a pipe into one relaynews) seems to have done
>a great job...

In general, the bigger the lump you can feed to relaynews, the better.
The really-ambitious could "cat * | relaynews" (more or less) rather than
using the "for" loop of the current "newsrun".  One thing you do lose,
though, is error recovery -- it is no longer possible to localize problems
to a single file.  One can imagine a hybrid approach that would pour it
all in, and then *if* there was an error message, back off and push the
files in one at a time to see who was to blame (since feeding the same
stuff through twice is generally harmless, and also fairly quick [rejection
of duplicates is really fast]).  At the time it didn't seem worth the
trouble.
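For the curious, the hybrid could look something like the sketch below.
This is hypothetical, not the real newsrun: "relaynews" here is a stand-in
shell function (failing on any batch containing the line "BAD") so the demo
is self-contained, and "hybrid_feed" is a name invented for illustration.

```shell
#!/bin/sh
# Stand-in for the real C News relaynews: pretend any input containing
# the line "BAD" is a broken batch and fail on it.
relaynews() {
	! grep -q '^BAD$'
}

# hybrid_feed DIR: pour every file in DIR down one relaynews; only *if*
# that fails, refeed the files one at a time to localize the trouble.
# Refeeding already-accepted articles is harmless, since duplicate
# rejection is fast.
hybrid_feed() {
	( cd "$1" || exit 1
	  if cat ./* | relaynews
	  then
		echo "lump accepted"
	  else
		for f in ./*
		do
			relaynews <"$f" || echo "trouble in $f"
		done
	  fi )
}

# Demo: a fake incoming directory with one bad batch among good ones.
spool=`mktemp -d`
printf 'good article\n' >"$spool/1"
printf 'BAD\n'          >"$spool/2"
printf 'another good\n' >"$spool/3"
result=`hybrid_feed "$spool"`
echo "$result"
rm -rf "$spool"
```

The point of the structure: the common case (everything clean) costs one
relaynews invocation, and the per-file loop only runs on the rare bad pass.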
-- 
A bit of tolerance is worth a  |     Henry Spencer at U of Toronto Zoology
megabyte of flaming.           | uunet!attcan!utzoo!henry henry@zoo.toronto.edu
