Subj : Re: Computers
To   : Spectre
From : tenser
Date : Tue Jul 23 2024 03:13:39

On 22 Jul 2024 at 08:21p, Spectre pondered and said...

 Sp> While I agree with your appraisal, effective 2:1 compression seemed to be
 Sp> pretty rare. Binaries and hard data didn't tend to compress as well as a
 Sp> chunk of equivalent sized text. I don't remember getting beyond ~40%
 Sp> compression as a general rule, and small files didn't tend to compress as
 Sp> well as larger ones.

Yeah.  It depends very much on _what_ you're compressing and how much
overhead you're willing to tolerate.  Still, even if you only get 30%,
you're about breaking even with `uuencode`.

 Sp> Its kind of weird in a way, a lot of stuff was optimised for slow speed
 Sp> transmission, 2400bps or so.. so things tended to be fairly small, then
 Sp> you'd compress it to get it down further... worked point to point out of
 Sp> a BBS nicely, but then you go and add overhead to get through something
 Sp> like a 7bit mail server... signs of the times I guess. It was always
 Sp> the last link... ISP or Uni or whatever it was you connected to, to
 Sp> home.. the weakest, slowest link.

Yeah.  People tend to forget that we had to contend with 7-bit systems,
fixed record sizes, and all sorts of other weird things that made
interoperability hard.  Our IBM mainframe running VM/CMS used to eat the
ends of lines longer than 80 columns when transferring email (which all
had to be 7-bit clean, of course, as it would be translated between
ASCII and EBCDIC).

--- Mystic BBS v1.12 A48 (Linux/64)
 * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
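[A note on the break-even arithmetic above: uuencode turns each 45-byte
chunk into a 62-character line (a length character, 60 encoded
characters encoding 3 bytes as 4, and a newline), so it inflates data by
roughly 38%; a compressor therefore has to shave off about 27% before
uuencoding just to get back to the original size. A quick sketch in
Python, using its `binascii.b2a_uu` routine on made-up sample data:]

```python
import binascii

# Illustrative sample data only, not from the original exchange.
data = bytes(range(256)) * 40

# uuencode each 45-byte chunk: 3 bytes -> 4 chars, plus a length
# character and a newline per output line.
encoded = b"".join(
    binascii.b2a_uu(data[i:i + 45]) for i in range(0, len(data), 45)
)

expansion = len(encoded) / len(data)
print(f"uuencode expansion factor: {expansion:.2f}")       # ~1.38

# Compression must reach at least this ratio before uuencoding
# just to match the size of the raw original.
print(f"break-even compression: {1 - 1 / expansion:.0%}")  # ~27%
```

[So "~30% compression" does indeed land right around break-even against
a bare uuencoded transfer.]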