Newsgroups: comp.lang.apl
Path: utzoo!utgpu!news-server.csri.toronto.edu!torsqnt!geac!itcyyz!yrloc!hui
From: hui@yrloc.ipsa.reuter.COM (Roger Hui)
Subject: Re: ACORN, and other niceties
Message-ID: <1991Jun13.044828.29504@yrloc.ipsa.reuter.COM>
Reply-To: hui@yrloc.ipsa.reuter.COM (Roger Hui)
Organization: Iverson Software Inc.
References: <1991Jun11.200234.6130@aplcen.apl.jhu.edu>
Date: Thu, 13 Jun 91 04:48:28 GMT

Dave, you related three arguments that your antagonist made.  The last
two are unsubstantiated opinions, and only the first seemed based on facts.
And the facts are wrong.
 
> 0.  "APL is not a good language for doing parallel algorithms."  His reasoning?
> People take huge outer products and then reduce or slice along a subarray,
> and thus the algorithms are inefficient (I tried to explain about
> properly-formed arguments for inner product, and about the RANK operator).
 
These so-called "huge outer products" can take CONSTANT time to
compute on parallel machines.  For example, consider the problem of
computing  x e. y  (x epsilon y)  where x and y are vectors
of length m and n.  One algorithm is
 
 +./ y =/ x   (or/[0] y jot . = x)
 
On a serial machine, this takes time O m*n (and IS inefficient).
On a SIMD machine (like the Connection Machine), a straightforward
translation of the algorithm yields:
 
 Propagate x and y to m*n processors                 O log m>.n
 Compute the m*n = operations of the outer product   O 1        [!]
 Compute the or-reduction                            O log n
 
The total time taken is O log m>.n .  Log time is "speed of light";
this "huge outer product" algorithm is in fact time optimal.
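
For readers without an APL or J system handy, here is a sketch of the
same  +./ y =/ x  computation in Python (run serially; the function
name and test data are mine, not part of the original post):

```python
def member(x, y):
    """x e. y : for each item of x, does it occur in y?"""
    # Outer product of equality: table[i][j] = (y[i] == x[j]).
    # On a SIMD machine each of the m*n comparisons gets its own
    # processor and runs in O 1; serially this loop is O m*n.
    table = [[yi == xj for xj in x] for yi in y]
    # Or-reduction down y's axis (the  +./  step): result[j] is
    # true iff x[j] appears anywhere in y.  On a SIMD machine this
    # tree-reduces in O log n.
    return [any(row[j] for row in table) for j in range(len(x))]

print(member([3, 1, 4, 1, 5], [1, 2, 3]))
# [True, True, False, True, False]
```

The point stands out clearly in this form: the m*n table is embarrassingly
parallel, and only the propagation and the reduction cost log time.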

-----------------------------------------------------------------
Roger Hui
Iverson Software Inc., 33 Major Street, Toronto, Ontario  M5S 2K9
(416) 925 6096

