Newsgroups: comp.lang.c++
Path: utzoo!utgpu!craig
From: craig@gpu.utcs.utoronto.ca (Craig Hubley)
Subject: Re: Static typing and OOP efficiency
Message-ID: <1991Mar4.082637.24821@gpu.utcs.utoronto.ca>
Organization: Craig Hubley & Associates
References: <27C523A2.2155@tct.uucp> <PCG.91Feb27190941@odin.cs.aber.ac.uk> <66645@brunix.UUCP> <27CE9CDC.4FD2@tct.uucp>
Date: Mon, 4 Mar 1991 08:26:37 GMT

In article <27CE9CDC.4FD2@tct.uucp> chip@tct.uucp (Chip Salzenberg) writes:
>I suspect that converted C programmers like me tend to non-virtuals by
>default, while converted Objective C and Smalltalk programmers tend to
>virtuals by default.

I suspect Chip is right.  Having learned C first, then LOOPS, then C++
(I learned Smalltalk in between but never did anything big with it), I 
have swung both ways.  Initially I thought of virtual functions as just
a weak sop to polymorphism but then I realized that all of C++'s means of
"ignoring the type" (conversion, overloading, virtual functions, and now
templates), taken together, are really quite powerful when used in 
combination.  Unfortunately the language rules are not set up to allow
them to be easily combined.

As others have already pointed out in this thread, this overhead is totally
unnecessary: *potential* dynamic binding need not mean *actual* dynamic
binding.  The door must be left open, but an optimizing compiler is free to
slam it shut in a later phase...

A parallel exists with distributed processing.  In the Linda shared memory
model, processes exchange data in an abstract shared memory, a "tuple space".
Naive implementations have only a runtime tuple space, accessible to all
processes, which of course is built on some mechanism like sockets or pipes, 
and is slow.  However, the exact same notation can be preprocessed down into 
physical shared memory, memory with semaphores, two-way connections between
processes, or whatever the actual data exchange may require.  There are
several preprocessors that do this effectively, including commercial ones.
In time this model (or another like it) may allow one source to be compiled 
into modules with arbitrary distribution.  At runtime the appropriate 
distribution could be selected (i.e. "plug into the wall" computing).

However, development of "plug and play" libraries seems to be retarded at 
present by the rapid adoption of C++ by C programmers who are using it to
continue their bad habits.  Even if all of these programmers could be
taught how to construct object-oriented architectures in C++, the syntax
would still make it rather clumsy.  It is probably too late to make simple
things simple, but some complex things might still be made possible.

When I teach C++, I usually encourage programmers to break those C habits
immediately by avoiding free functions, public members (especially data
members), non-virtual functions, and even to prefer references over pointers.
Multiple inheritance of abstract classes is encouraged, but inheritance used
merely for aggregation (building up an implementation from parts) is
discouraged.  This avoids most of the hassle of protected members, complex
flow control, virtual base classes, and private inheritance, which is more a
shorthand for including a private data member than a form of inheritance.

In addition to the very real reusability benefits, this helps break those old
C habits right away.  It works pretty well.  But it wasn't easy for me, and
it ain't easy for them.

IMHO being a C expert is a disadvantage to learning to design in C++, 
although a working knowledge of C obviously helps initially.  It was
definitely easier for me to have spent a year with LISP & LOOPS before coming
back to learn C++.  Of course there are many attitudes about what is "good" 
C++ design, but clearly any good design must make good use of the features
C++ adds to C, and most of those are there to support object-oriented
programming and reusable libraries.

I would suspect that, for most firms, it is not worthwhile economically to
retrain all of their programmers, abandon an already-ANSI-standard language,
buy new development software and wait around for decent CASE tools and 
libraries, if all they want is a "better C".  Reusability is what sells C++
even if, right now, it can only half-deliver.  If it fails in this promise,
and compilers for the other O-O languages become better optimized, and more
libraries become available because they are easier to write, C++ may well
be superseded, and be remembered only as a stepping-stone, like Pascal or
PL/I or Simula.  Or B.  It would be nice if we could avoid that extra step.

-- 
  Craig Hubley   "...get rid of a man as soon as he thinks himself an expert."
  Craig Hubley & Associates------------------------------------Henry Ford Sr.
  craig@gpu.utcs.Utoronto.CA   UUNET!utai!utgpu!craig   craig@utorgpu.BITNET
  craig@gpu.utcs.toronto.EDU   {allegra,bnr-vpa,decvax}!utcsri!utgpu!craig
