Subj : Re: Change Patterns (was: Polymorphism sucks)
To   : comp.programming,comp.object
From : rem642b
Date : Wed Aug 10 2005 09:08 pm

> From: Programmer Dude
> Most professional programmers learned long ago not to code for the
> 'here and now'.

IMO that position is overstated. Can we agree on: Que sera sera,
whatever will be will be, the future's not ours to see. We have to
code for the present in order to get anything out the door. But we
should code with an idea of what future needs might be. But we
shouldn't expect the future to turn out the way we expected.

> There's really three things here to talk about. Programming with
> Objects (encapsulated "things" that bind data and code to manipulate
> that data),

Anyone who has given a second thought to today's serious OOP languages
(Java and C++, for example) knows that the code isn't bound into the
instance objects; it's bound into the classes. There's a runtime
pointer from each instance back to its class, whereby you can do
polymorphism: given an object and a method signature, you can call the
appropriate method from the **class** method-library.

If you claim you meant to say the class objects are the things you
were talking about, then what you say is untrue: most of the time you
don't program with the class objects, you program with the instance
objects (except when calling static methods, which aren't really
methods, they're functions, and calling them is procedural rather than
OO programming). So is it OK if I re-interpret what you wrote to mean
the correct thing instead of the way you worded it?

In that case, you skipped a step: there's programming with objects
that aren't instances of classes with the behaviour described above.
They are just encapsulations of a bunch of data in an organized
manner, but without a specific link to any class that contains methods
for that kind of data.

For some kinds of programming, the oldstyle (1964) Lisp style of
generic functions that internally dispatch on the type of a parameter
is more appropriate, as has been pointed out in these newsgroups
recently, in the case where the set of data types is relatively static
but the set of behaviours keeps changing from day to day. With the
oldstyle Lisp approach, each time you add new behaviour, you just add
one new generic function, located in one file, and link that file in
with your application. If you tried to do that with OOP, you'd suffer
*FLAG DAY*: you must add a new method signature to your master
INTERFACE, implement that new method in each and every class that
implements that interface, ship all those classes out simultaneously,
and hope there's nothing wrong with any of them.

(And then of course there are newstyle Common Lisp generic functions,
where each particular combination of object-type and behaviour is
split out into its own specification attached to a generic function,
and you are free to organize your specifications whichever way you
want: the OO way, with all behaviour common to a single data type
together; the old-fashioned way, with all data types implementing a
particular behaviour together; or with a patch file containing the
very latest additions to either behaviour or object-type and
everything else just the way it was yesterday.)
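To make the contrast concrete, here's a rough sketch (untested, and
the shape types CIRCLE and RECT are made up for the example, not from
anything in this thread):

;;; Two hypothetical shape types.
(defstruct circle radius)
(defstruct rect width height)

;;; Oldstyle (1964): one ordinary function that dispatches internally
;;; on the type of its parameter.  Adding another behaviour later
;;; (say, PERIMETER-OLDSTYLE) means writing one more function like
;;; this, in one file, linked in with the application -- no flag day.
(defun area-oldstyle (shape)
  (typecase shape
    (circle (* pi (circle-radius shape) (circle-radius shape)))
    (rect   (* (rect-width shape) (rect-height shape)))))

;;; Newstyle CLOS: the generic function and each of its methods are
;;; separate top-level forms, so you can group them by behaviour (as
;;; here), by data type, or toss today's additions into a patch file.
(defgeneric area (shape))

(defmethod area ((s circle))
  (* pi (circle-radius s) (circle-radius s)))

(defmethod area ((s rect))
  (* (rect-width s) (rect-height s)))

;;; Either style is called the same way:
;;;   (area-oldstyle (make-circle :radius 2))  => 12.566...
;;;   (area (make-rect :width 3 :height 4))    => 12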
> programming with polymorphism (having diverse Objects that can
> sometimes be treated identically)

This is where OOP or newstyle Common Lisp generic functions shine.

> and programming with hierarchical Objects.

Are you referring to hierarchical data structures, such as nested
lists, objects with pointers to other objects, etc.? Or are you
referring to inheritance of behaviour from parent classes? Maybe you
should include both, as separate items #3 and #4?

> The simple fact is that hierarchical organization is common in many
> areas of human experience. Genetics, Organizations, Animals and many
> forms of data are naturally hierarchical.

Um, genetics is only *approximately* hierarchical. In sexual
organisms, two parallel segments of DNA may cross over during meiosis,
so that the resultant segment has dual inheritance (two parents). In
prokaryotes, DNA may be transferred from one cell to an unrelated cell
and that newly transferred DNA permanently copied into the recipient's
genome, so the after-the-event cell now has two parents (the donor
cell, and the before-the-event self). Animals of the commonly known
types are mostly sexual, which means *every* animal of such types has
two parents, totally breaking the hierarchical idea from the get-go.

Emulation by strict hierarchy breaks fatally at the slightest
deviation from 100% hierarchy, whereas emulation by other mechanisms
can handle the 99.9% hierarchy just fine, with only slight extra
overhead, and has no problem handling the 0.1% violation of hierarchy.
(Are you interested in my proposal to track genome pedigree on a
per-base-in-context basis instead of a per-gene or per-cell or
per-organism basis, whereby each base has a unique parent-in-context
even at the points in space-time where a cross-over or insertion or
deletion happened immediately between the given base and one of its
immediate neighbors, the only exception being if a point insertion
happened at the same point a splice occurred, which might be never?)
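For concreteness, here's roughly what such a non-strict mechanism
looks like (untested sketch; ORGANISM and its slots are names I just
made up for the example):

;;; A strict hierarchy would hard-wire a single PARENT slot and break
;;; fatally on the first organism with two parents.  Storing a *list*
;;; of parents costs one extra cons in the 99.9% case and handles the
;;; 0.1% case (cross-over, gene transfer, sexual reproduction) with
;;; no special-casing at all -- the pedigree is a DAG, not a tree.
(defstruct organism name (parents '()))

(defun ancestors (org &optional (seen '()))
  "Return all transitive ancestors of ORG; works on a DAG."
  (dolist (p (organism-parents org) seen)
    (unless (member p seen)
      (push p seen)
      (setf seen (ancestors p seen)))))

;;; Example: every sexual animal has *two* parents from the get-go.
;;;   (let* ((mom (make-organism :name "mom"))
;;;          (dad (make-organism :name "dad"))
;;;          (kid (make-organism :name "kid" :parents (list mom dad))))
;;;     (mapcar #'organism-name (ancestors kid)))  => ("dad" "mom")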
> OOD is harder, and does require a higher skill set, than simpler
> forms of programming. It very much requires you to fully understand
> what you're doing, and it very much requires that you do the upfront
> analysis and design that WE'VE BEEN SAYING FOR DECADES SHOULD BE DONE
> ON ALL PROGRAMMING PROJECTS.

Your expectation that at the start of any programming project we
already know exactly what we will end up with is unrealistic.
Apparently you've never done any R&D, where you explore various ways
of solving a problem, doing experiments to see which methods work and
which were only nice-seeming ideas, until finally you have something
that does not only what you originally conceived but a lot of other
useful things too that you didn't envision at the start of the
project.

There is a difference between your extreme of cut-and-dried
implementation of pre-planned algorithms and the scientific-research
extreme of not even knowing at the start whether success is possible,
much less how exactly the eventual success will be organized.
Somewhere in the middle is R&D, where you are pretty sure you can
succeed, but you aren't sure exactly how, so you try things until you
get a working system, then you go back and refactor it to make it more
"clean" so it'll be easier to maintain in the future.

So you never ever needed to write a first cut and then refactor it
later? All your projects were so easy you could plan the whole
organization from the start and never change one thing in your
organizational chart?? I guess I've worked on much more difficult
projects than you have. I've had to do real software R&D many times.

Here's a good mantra for all software people: REFACTOR

By the way, what do you-all think of this for beginning programming
lessons:
  http://www.rawbw.com/~rem/HelloPlus/hellos.html#s4outl
Should I add one more chapter on OOP, basically refactoring the
organization of object types and/or behaviour types in cases where
that would be useful, but knowing when it wouldn't be useful, so you
don't waste effort converting a program to OOP without any gain in
value?

> It's just that you are slightly more likely to get away--initially
> maybe-- with not doing that with, say, procedural programming. It can
> be fatal more quickly in OOD.

I think this is a symptom of what's wrong with the "straightjacket" of
strict OOP as in Java: for many kinds of R&D, it's just too
constraining to have to use class hierarchies during the early
exploratory phases, and then have horrible problems later when you
realize you made a mistake and need to totally refactor your classes.
Better to start with procedural programming and convert to OOP only at
the point where you are sure how the class hierarchy should turn out;
maybe convert individual parts to OOP only at the point in time when
you are sure of the correct class hierarchy for that particular part
of the program, indefinitely leaving the rest of your R&D playpen
procedural.
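Roughly like this (untested sketch; the event types and names are
invented for the example, not from anybody's real project):

;;; Exploratory phase: plain tagged data plus one procedural function.
;;; Nothing here commits me to any class hierarchy yet.
(defun handle-event (event)
  (ecase (first event)
    (:click (format t "click at ~a~%" (second event)))
    (:key   (format t "key ~a~%" (second event)))))

;;; Later, once *this* part of the design has settled, convert just
;;; this part to classes plus a generic function, leaving everything
;;; else in the R&D playpen procedural.
(defclass click-event () ((pos :initarg :pos :reader event-pos)))
(defclass key-event   () ((key :initarg :key :reader event-key)))

(defgeneric handle (event))

(defmethod handle ((e click-event))
  (format t "click at ~a~%" (event-pos e)))

(defmethod handle ((e key-event))
  (format t "key ~a~%" (event-key e)))

;;; Before:  (handle-event (list :click '(3 4)))
;;; After:   (handle (make-instance 'click-event :pos '(3 4)))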