Newsgroups: comp.os.os2.programmer
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!van-bc!ubc-cs!unixg.ubc.ca!cheddar.ucs.ubc.ca!ballard
From: ballard@cheddar.ucs.ubc.ca (Alan Ballard)
Subject: Re: Reading more than 64K
Message-ID: <1991Apr23.184653.1953@unixg.ubc.ca>
Sender: news@unixg.ubc.ca (Usenet News Maintenance)
Nntp-Posting-Host: cheddar.ucs.ubc.ca
Organization: Computing Services, University of British Columbia
References: <SO-ECS.91Apr23115117@stekt1.oulu.fi>
Distribution: comp.os.os2.programmer
Date: Tue, 23 Apr 1991 18:46:53 GMT

In article <SO-ECS.91Apr23115117@stekt1.oulu.fi> so-ecs@stekt1.oulu.fi (Juha Ruotsalainen ti) writes:
>How can my program read a file bigger than 64k into memory. 
> ...
>Is there a problem with DosAllocHuge....
DosAllocHuge allocates a set of consecutive segment selectors.  With
appropriate pointer arithmetic, it is possible to treat the result as
one big array, but underneath it really is a set of separate 64K segments.
DosRead etc. will never go beyond the end of the segment you pass in as a
parameter... they have no way of knowing it is OK to carry on into the
next selector.
 
I don't think there is any way to make DosRead read a full 64K in one
operation; its length parameter is a USHORT, so the most you can ask for
at once is 0xFFFF bytes.  So I guess it will take two reads for each
segment: e.g., 0xFFFF bytes to the base of one segment, 1 byte to fill
in the last byte of the segment, then 0xFFFF to the base of the next.
Probably an easier way would be to just read a series of 32K chunks and
bump the segment part of the pointer every second read.
 


Alan Ballard                   | Internet: ballard@ucs.ubc.ca
University Computing Services  |   Bitnet: USERAB1@UBCMTSG
University of British Columbia |    Phone: 604-822-3074
Vancouver B.C. Canada V6R 1Z2  |      Fax: 604-822-5116
