Newsgroups: comp.arch
Path: utzoo!henry
From: henry@zoo.toronto.edu (Henry Spencer)
Subject: Re: Compiler Costs
Message-ID: <1990Jul13.173331.22918@zoo.toronto.edu>
Organization: U of Toronto Zoology
References: <40052@mips.mips.COM> <628@dg.dg.com> <64045@sgi.sgi.com>
Date: Fri, 13 Jul 90 17:33:31 GMT

In article <64045@sgi.sgi.com> karsh@trifolium.sgi.com (Bruce Karsh) writes:
>>Many "large" programs spend most of the CPU time in one or a few relatively
>>"tiny" routines.
>
>This is often stated.  Is there any evidence for this?  What does the
>distribution of time look like for, for instance, a compiler?  I'd expect
>that there would be lots of time-sinks...

My personal feeling is that the "hot spot" principle has become accepted
dogma on the basis of rather few actual cases, the most infamous being
the XPL compiler that spent all of its time skipping trailing blanks in
card images.  I suspect that a lot of hot-spot examples are the result
of something like this:  stupid code, or a stupid data format.
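For concreteness, a made-up miniature of that XPL-style hot spot -- not
the real code, obviously, just the shape of the problem: every record
arrives padded to a full 80-column card image, so the scanner burns its
time walking back over blanks on every single card.

```c
/* Hypothetical sketch of a card-image blank-stripper.  With mostly
 * empty cards, nearly every cycle goes into this loop -- the "hot
 * spot" is really the stupid data format, not the code around it. */
int trimmed_length(const char *card, int len)
{
    while (len > 0 && card[len - 1] == ' ')  /* walk back over blanks */
        len--;
    return len;
}
```

The cure there is not a faster blank-skipper but a saner input format,
e.g. recording the real line length up front.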

Our experience with C News was that there never was a single massive hot
spot.  Typically the profile would show half a dozen major contributors
to execution time and then a long tail of smaller ones.  Often, the
way to fix it was *not* to optimize the profile leaders, but to look at
where they were being called from and why, and streamline the algorithms
in the callers.
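A made-up illustration of that caller-side fix (not actual C News
code): the profiler fingers strlen() as the leader, but tuning
strlen() buys almost nothing, because the real bug is a caller that
rescans the whole string on every trip around its loop.

```c
#include <string.h>

/* Profile leader: strlen().  The caller calls it once per iteration,
 * so the loop is quadratic in the string length. */
int slow_count(const char *s, int c)
{
    int i, n = 0;
    for (i = 0; i < (int)strlen(s); i++)  /* strlen() on every pass */
        if (s[i] == c)
            n++;
    return n;
}

/* The caller-side fix: hoist the length computation out of the loop.
 * strlen() drops out of the profile without being touched. */
int fast_count(const char *s, int c)
{
    int i, n = 0, len = (int)strlen(s);   /* one call, in the caller */
    for (i = 0; i < len; i++)
        if (s[i] == c)
            n++;
    return n;
}
```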
-- 
NFS:  all the nice semantics of MSDOS, | Henry Spencer at U of Toronto Zoology
and its performance and security too.  |  henry@zoo.toronto.edu   utzoo!henry
