Subj : Re: OO compilers and efficiency
To   : comp.programming
From : Chris Dollin
Date : Mon Jul 25 2005 12:13 pm

Ed Prochak wrote:

> I for one thought that GC was a stupid idea. It makes the OO
> programmers lazy.

It frees them to worry about matters other than the details of store
management, just as it does for Lisp programmers, ML programmers, Pop11
programmers, Prolog programmers, and Algol 68 programmers. (This list is
not intended to be exhaustive.)

> The cleanup merely requires some protocols that identify when the
> object has outlived its usefulness.

And behind your "merely" is a tangle of complexity. Those protocols are
non-trivial and costly except in some straightforward cases. These are
exactly the sort of thing for which machine assistance is desirable.

> In embedded environments you usually know fairly precisely when you
> are done with something and can invoke its cleanup (destructor)
> methods. I don't see much reason why this isn't possible to know in
> other cases.

All I can say is - try *writing* such other cases. And then compare the
results with what you get in a GCd language.

> The "snail trail" is in the objects already. When I'm done with an
> object I invoke its cleanup (destructor) method, which in turn invokes
> the cleanup methods for each child object that it allocated
> (constructed), which in turn invokes . . . methods down to the
> elemental/terminal objects.

Well, if you do /that/, either your code is broken or you have a
single-reference assumption which I'm not prepared - or able - to make
in my code.

> When OO was first proposed, I sincerely believed it would help resolve
> some of the memory leak problems our C applications had at the time.
> Instead, OO applications (especially of the C++ flavor) seem to have
> even greater waste and abuse of memory. So much was promised, and so
> little delivered.

I note that this is in a non-GCed environment, yes?
--
Chris "electric hedgehog" Dollin
predicting self-predictors' predictions is predictably unpredictable.