Newsgroups: news.software.nntp
Path: utzoo!utstat!helios.physics.utoronto.ca!news-server.csri.toronto.edu!mailrus!b-tech!zeeff
From: zeeff@b-tech.ann-arbor.mi.us (Jon Zeeff)
Subject: Re: improved nntpxfer
Message-ID: <+&*$DG_@b-tech.uucp>
Organization: Branch Technology
References: <X?#$?!*@b-tech.uucp> <1990Mar14.231020.5784@smurf.sub.org>
Date: Sun, 18 Mar 90 16:51:40 GMT

>< I wanted to use nntpxfer, so I made some improvements - basically more 
>< efficient reads and batching of articles into memory and then popening 
>< rnews when the buffer gets full.  Let me know if you want to beta test 
>< a copy.  
>< 
>I decided to go the simpler route and, instead of popen()ing inews, just
>create a temporary file and rename it to "/usr/spool/news/xfer.%d.%d",
>getpid(), counter++ (and let the next rnews -U deal with it).

This does seem slightly simpler, but I'd be concerned about how often
rnews -U gets run; you want to keep the delays to a minimum.  Also, some
systems (like this one) don't have an rnews -U or anything equivalent.
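For what it's worth, the write-then-rename scheme being discussed might look
roughly like this.  The spool path, the ".tmp" prefix, and the error handling
are assumptions based on the description above, not code from either poster:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static int counter = 0;

/* Write one batch of articles to a uniquely named file in the spool
 * directory.  Writing to a temp name first and then rename()ing means
 * a concurrent rnews -U run sees either no file or a complete one,
 * since rename() is atomic within a filesystem. */
int spool_batch(const char *spooldir, const char *batch)
{
    char tmpname[1024], finalname[1024];
    FILE *fp;

    sprintf(tmpname, "%s/.xfer.tmp.%d", spooldir, (int) getpid());
    sprintf(finalname, "%s/xfer.%d.%d", spooldir, (int) getpid(), counter++);

    if ((fp = fopen(tmpname, "w")) == NULL)
        return -1;
    if (fputs(batch, fp) == EOF) {
        fclose(fp);
        unlink(tmpname);
        return -1;
    }
    if (fclose(fp) == EOF) {
        unlink(tmpname);
        return -1;
    }
    return rename(tmpname, finalname);
}
```

The atomic rename is the whole point of the trick: rnews -U can run on any
schedule without ever picking up a half-written batch.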


>Another improvement is to open two NNTP channels to your favorite server. On
>one, you do your NEWNEWS, and the other is used to fetch articles as soon as
>their IDs come in over the first channel.
>This is necessary on some low-speed Internet links like ours (which frequently
>makes nntpd time out, drops connections, and other fun stuff) and basically
>enabled us to get 24 hours of Usenet traffic in 14 hours instead of 30.
>

In my experience, once the IDs start coming over, they all make it pretty
quickly.  You still have to fall back to the old "last success time" if the
second connection fails, meaning you have to start over again on the next
connection.
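The "last success time" bookkeeping amounts to only advancing the NEWNEWS
start time after a transfer completes, so a dropped connection just means
re-asking for the same window next run.  A minimal sketch, assuming a one-line
"YYMMDD HHMMSS" file format (not necessarily the real nntp.site layout):

```c
#include <stdio.h>
#include <stdlib.h>

/* Read the date/time of the last fully successful transfer.
 * Callers must pass buffers of at least 7 bytes each. */
int read_last_success(const char *file, char *date, char *clock)
{
    FILE *fp = fopen(file, "r");
    if (fp == NULL)
        return -1;
    if (fscanf(fp, "%6s %6s", date, clock) != 2) {
        fclose(fp);
        return -1;
    }
    fclose(fp);
    return 0;
}

/* Record a new success time -- call this ONLY after every article
 * from the NEWNEWS window has been fetched and spooled. */
int write_last_success(const char *file, const char *date, const char *clock)
{
    FILE *fp = fopen(file, "w");
    if (fp == NULL)
        return -1;
    fprintf(fp, "%s %s\n", date, clock);
    return fclose(fp) == 0 ? 0 : -1;
}
```

Keeping the update at the very end is what makes a failed second connection
safe: the worst case is refetching articles you already have, never missing
ones you don't.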

Another solution would be to allow multiple lines in the
/usr/spool/news/nntp.site file and have nntpxfer cycle through them.  You
could then break the transfer up a bit, e.g.:

rec time time
soc time time
comp time time
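Parsing such a file is straightforward; a sketch of how nntpxfer might read
the entries before looping over them, one NEWNEWS per newsgroup prefix.  The
struct layout and field widths here are assumptions for illustration:

```c
#include <stdio.h>
#include <string.h>

struct xfer_entry {
    char groups[64];   /* newsgroup prefix, e.g. "rec", "soc", "comp" */
    char date[8];      /* YYMMDD of last successful transfer */
    char clock[8];     /* HHMMSS of last successful transfer */
};

/* Parse the file contents (already read into a string) into entries;
 * returns the number of well-formed lines found, at most max. */
int parse_site_file(const char *text, struct xfer_entry *ent, int max)
{
    int n = 0;
    const char *p = text;

    while (n < max && *p != '\0') {
        if (sscanf(p, "%63s %7s %7s", ent[n].groups,
                   ent[n].date, ent[n].clock) == 3)
            n++;
        p = strchr(p, '\n');   /* advance to the next line */
        if (p == NULL)
            break;
        p++;
    }
    return n;
}
```

Per-prefix timestamps also mean a failure in one hierarchy only forces a
restart of that hierarchy's window, not the whole feed.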


