Newsgroups: comp.lang.c
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sample.eng.ohio-state.edu!mckinley!rob
From: rob@mckinley.eng.ohio-state.edu (Rob Carriere)
Subject: Re: low level optimization
Message-ID: <1991Apr28.184851.22140@ee.eng.ohio-state.edu>
Sender: news@ee.eng.ohio-state.edu
Organization: The Ohio State University Dept of Electrical Engineering
References: <22246@lanl.gov> <1991Apr24.174057.22470@ee.eng.ohio-state.edu> <703@curly.appmag.com>
Date: Sun, 28 Apr 1991 18:48:51 GMT

In article <703@curly.appmag.com> pa@appmag.com (Pierre Asselin) writes:
[about the 2-objects-in-1 scheme of IM optimization]
>But does this scheme really count?  Suppose there are N modules subject
>to interoptimization.  Translating any one of them leads to a
>2^(N-1)-way branch as to what set of optimizations is allowed.
>Hmmm...  or 2^M, where M is the number of optimization tricks the
>compiler knows about.  Still too big, though.

Well, this is why we academic types invented the thought experiment... :-)
Seriously, the proposed implementation was just that: a proof of concept.  A
real implementation would be a lot smarter about it.  If you build an RCS-like
structure of diffs, I'm pretty sure you can get away with low space and time
wastage.  Or you could be lazy and either IM-optimize everything or nothing at
all.  The idea was, after all, to encourage the programmer to use a
compilation scheme that provides IM optimization, but still allow compilation
in the general case for standard conformance.  In other words, the executable
that results when the linker goes `oops' can stink as long as it works well
enough to conform to the standard.

SR
---


