Subj : Re: Threading in JavaScript
To   : comp.lang.javascript,netscape.public.mozilla.jseng
From : Stephen
Date : Mon Feb 03 2003 03:14 am

Gordan wrote:
> Stephen wrote:
>
> snip some...
>>>I am already aware of the XMLHTTP method for downloading things.
>>>Unfortunately, it only allows the download of one file at a time, which
>>>is why I was looking for a different solution that allows me to download
>>>multiple files simultaneously.
>>>
>>
>>Will these (possibly as many as 100) files come from the same server or
>>from multiple different servers? Do you have control over all/any of
>>these servers' configs?
>
> No, no control over any server should be assumed. In theory, it could be
> any internet URL.
>
>>How do you envision "download[ing] multiple files simultaneously"
>>happening across TCP/IP using HTTP? What networking issues have you
>>considered in this context?
>
> I'm not sure what you mean by this.

Well, this may depend on whether your concept is to obtain these data
files from a single server or from a multitude of different servers.
But ... I am thinking that once the requests go out the wire you're at
the mercy of the network.
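As an aside, "multiple simultaneous downloads" from script doesn't actually require threads: you can create several asynchronous request objects in a loop, each with its own completion callback. A hedged sketch of that pattern -- the names here (fetchAll, fakeTransport) are made up for illustration, and fakeTransport stands in for whatever real transport a given browser offers (XMLHTTP on IE, XMLHttpRequest on Mozilla):

```javascript
// Dispatch every URL at once; collect results via callbacks.
// "transport" stands in for XMLHTTP / XMLHttpRequest: each call starts
// one asynchronous GET and invokes cb(url, responseText) on completion.
function fetchAll(urls, transport, onAllDone) {
  var results = {};
  var pending = urls.length;
  for (var i = 0; i < urls.length; i++) {
    // Each request gets its own object/connection; none waits for another.
    transport(urls[i], function (url, text) {
      results[url] = text;
      pending--;
      if (pending === 0) onAllDone(results);
    });
  }
}

// Demo with a fake asynchronous transport (no real network involved).
function fakeTransport(url, cb) {
  setTimeout(function () { cb(url, "body of " + url); }, 10);
}

fetchAll(["http://a/1", "http://b/2"], fakeTransport, function (results) {
  console.log(results["http://a/1"]); // prints "body of http://a/1"
});
```

Whether this buys you anything on a real network is exactly the question below.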
It could be that all the fancy multi-threading in the world (assuming
this were possible) would come to naught because of network
considerations:

--overhead for multiple TCP connections rather than one persistent
  connection
--timeout limits placed by server configs on the time allowed for any
  single persistent connection
--limitations server configs might place on the number of simultaneous
  connections allowed from a single client (thus my question above about
  whether you control the servers and thus the servers' configs)
--network congestion that could increase latency because of a large
  number of requests/responses --especially responses coming back to the
  single point of origin
--user's bandwidth

It could be that these and related issues would completely negate any
advantage you obtain (or believe you might obtain) by doing
"simultaneous downloads".

So ... do you have a vision of what happens on the network and on the
server if, say, 100 requests from the same client hit the server roughly
simultaneously? Simply meaning, if the network costs of a large number
of "simultaneous" downloads would overwhelm any advantage, then the cost
outweighs the benefit, and we've probably used up more discussion time
than the issue is worth. But that's an empirical determination yet to be
made...

> What I would like to do is fork a separate connection to each server
> and do a GET request.

Is this really necessary? I understand the browser to work something
like the following: A stream of text arrives at the browser. This stream
of text is an HTML page. The browser parses this incoming stream. During
this parse it may encounter URLs that cause it to issue additional
requests for those resources. Some of these might be or
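To make the connection-limit point above concrete: rather than releasing all 100 requests at once, a client can keep only a small fixed number in flight (HTTP/1.1 suggests clients maintain no more than two persistent connections per server) and queue the rest. A hedged sketch -- ThrottledQueue and the fake transport are invented names for illustration, not any browser API:

```javascript
// Keep at most "limit" requests in flight; queue the rest.
// "transport" is a stand-in for XMLHTTP / XMLHttpRequest: it starts one
// asynchronous GET and calls cb(url, responseText) when done.
function ThrottledQueue(limit, transport) {
  var active = 0;
  var waiting = [];
  function next() {
    if (active >= limit || waiting.length === 0) return;
    active++;
    var job = waiting.shift();
    transport(job.url, function (url, text) {
      active--;
      job.cb(url, text);
      next(); // start the next queued request, if any
    });
  }
  this.get = function (url, cb) {
    waiting.push({ url: url, cb: cb });
    next();
  };
}

// Demo: a fake transport that records how many requests run at once.
var inFlight = 0, maxInFlight = 0;
function fakeTransport(url, cb) {
  inFlight++;
  if (inFlight > maxInFlight) maxInFlight = inFlight;
  setTimeout(function () { inFlight--; cb(url, "ok:" + url); }, 5);
}

var q = new ThrottledQueue(2, fakeTransport);
for (var i = 0; i < 10; i++) {
  q.get("http://example/" + i, function () {});
}
// maxInFlight never exceeds 2, however many requests are queued.
```

The same queue would not, of course, change what the server or the network in between decides to do with those connections.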