Newsgroups: news.software.b
Path: utzoo!utgpu!cunews!micor!latour!mcr
From: mcr@Sandelman.OCUnix.On.Ca (Michael Richardson)
Subject: Re: More about C News and barfing.
Message-ID: <1991Feb10.211306.3800@Sandelman.OCUnix.On.Ca>
Organization: Sandelman Software Works, Debugging Department, Ottawa, ON
References: <1991Feb7.023059.3082@Sandelman.OCUnix.On.Ca> <1991Feb8.210554.22633@engin.umich.edu>
Date: Sun, 10 Feb 91 21:13:06 GMT

In article <1991Feb8.210554.22633@engin.umich.edu> stealth@caen.engin.umich.edu (Mike Pelletier) writes:
>With the mention of uncompression failure, my first thought is to check
>to make sure the downstream site is using the same number of bits as the
>upstream site in the compression algorithm.

  Well, unless SCO really loses in the neuron count (they aren't _that_
bad), they shouldn't have screwed up compress. My compress man page
says:

     The  bits
     parameter specified during compression is encoded within the
     compressed file, along with a magic number  to  ensure  that
     neither  decompression  of  random data nor recompression of
     compressed data is subsequently allowed.
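
  For the curious, you can check the recorded bits value yourself: a
.Z file begins with the magic bytes 0x1f 0x9d, and the low five bits
of the third byte are the bits parameter (0x80 there flags block
mode). Something along these lines -- a quick sketch, and "checkz" is
just a name I made up, not anything shipped with C News -- will tell
you what a batch claims it was compressed with:

/* checkz.c -- report the "bits" value recorded in the header of a
 * compress(1) file.  Usage: checkz batchfile
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *fp;
    int c1, c2, c3;

    if (argc != 2 || (fp = fopen(argv[1], "r")) == NULL) {
        fprintf(stderr, "usage: checkz batchfile\n");
        return 1;
    }
    c1 = getc(fp);              /* magic byte 1: should be 0x1f */
    c2 = getc(fp);              /* magic byte 2: should be 0x9d */
    c3 = getc(fp);              /* flags byte: bits + block-mode flag */
    fclose(fp);
    if (c1 != 0x1f || c2 != 0x9d || c3 == EOF) {
        printf("%s: not a compress(1) file\n", argv[1]);
        return 1;
    }
    printf("%s: compressed with %d bits%s\n", argv[1],
        c3 & 0x1f, (c3 & 0x80) ? " (block mode)" : "");
    return 0;
}

  Running that over whatever is sitting in the incoming batch
directory and comparing against what the upstream says it sends would
settle the 12-versus-16-bit question pretty quickly.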


>IE, perhaps your downfeed is trying to uncompress the batches using 16 bits
>instead of 12 bits.  What is "a significant number"?
 
  I'm not sure. Given that a single bad compressed batch trashes the
system (fills the disk), it would be rather hard to get a nice
percentage...

-- 
   :!mcr!:            |  The postmaster never | - Pay attention only
   Michael Richardson |    resolves twice.    | to _MY_ opinions. -  
 HOME: mcr@sandelman.ocunix.on.ca +   Small Ottawa nodes contact me
 Bell: (613) 237-5629             +    about joining ocunix.on.ca!
