Newsgroups: comp.arch
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wuarchive!rice!ariel.rice.edu!preston
From: preston@ariel.rice.edu (Preston Briggs)
Subject: Re: Compilers & SPECmarks...
Message-ID: <1991Apr12.224048.24300@rice.edu>
Sender: news@rice.edu (News)
Organization: Rice University, Houston
References: <1991Apr11.143529.17969@odin.corp.sgi.com> <MCCALPIN.91Apr11151903@pereland.cms.udel.edu>
Date: Fri, 12 Apr 91 22:40:48 GMT

mccalpin@perelandra.cms.udel.edu (John D. McCalpin) writes:

>Note that this is not relevant to what is being done with the
>Matrix300 code.  The aggressively optimized code is doing *exactly the
>same* operations as the source, but is doing them in a *significantly
>different* order. 

True.

>My understanding of the state of the art is that
>this is not really very generally do-able --- it works for matrix300
>because Gaussian Elimination of dense matrices is very well
>understood.   My guess is that the compiler would end up doing
>considerably less well on any other piece of code (for which the
>optimum answer is not necessarily known by the compiler writers in
>advance). 

I think you're too pessimistic.
Carr and Kennedy (and others) just know a lot about nested loops.
I think that's the key, rather than the particular computation.

It may be that the Kuck analyser is recognizing DAXPY explicitly
and substituting a call to a hand-coded routine.  I'd say that's
cheating against the spirit of the benchmark.  But inlining and
general loop optimization are just nice compiler technology (as in
vectorization), and I'd think we'd want to encourage its wide adoption.

Preston Briggs
