Subj : Re: extreme programming (thoughts)
To   : comp.programming
From : Dan Mills
Date : Thu Jul 21 2005 09:49 pm

matt@mailinator.com wrote:

> in xp, you stick all your programmers into one room together. now,
> knowing the personal hygiene of some devs, that alone is reason to be
> wary. next, you give them half as many computers as they need -- yep,
> we have to share computers, 2 devs to a box. one is the "pilot", the
> other is the "navigator". this is called pair programming.

Actually, pair programming does work for me, but it may not be for
everyone. I find that being forced to explain the code to another
programmer while I am writing it offsets my tendency to write very
terse code, which really helps the next poor sap who has to work on it.

> what's more, you do not have a fixed desk, nor a fixed partner.
> everyone rotates machines and partners. better hope you've got your
> flu shots. oh, and at least two daily "stand up" meetings where
> everyone is forced to, you guessed it, stand up for the duration.

If you are working in an environment where the requirements are, shall
we say, "poorly defined", then unless you have a **VERY** good systems
architect these meetings can actually save a lot of wasted time.

I am, however, not much of a fan of the hot desks and frequently
changed partners (though the latter probably does reduce problems with
poorly specified interfaces, as they will tend to be spotted earlier).

> that is the physical arrangement. there is more to xp than that.
>
> the core idea is nice: fast, flexible design & development. during the
> stand up meetings you have the devs, designers, and business owners
> all in the room together to hash out requirements details. things can
> & do change, but the cost of development time reflects that, which
> keeps the biz owners honest. cool.

Sounds like a major improvement on some projects I have worked on!

> but it starts to get stupid, fast. rather than estimate in hours or
> days, things are quantified in "nuts":
> nebulous units of time (NUTs). each nut is worth a certain number of
> days, but that figure changes. i think it's designed to prevent the
> biz owners from having quantitative expectations; instead, "it's all
> relative, baby".

Probably reasonable when discussing implementation tradeoffs within the
team (in fact, big-O notation is the same idea in a related field), but
I am not sure it is that useful for the interface between the
programming team and the client.

> next is the code.
>
> biggest beef: NO COMMENTING ALLOWED! that's right, we're designing
> highly complicated financial systems, and i'm not allowed to document
> or include helpful comments for any future developers. the idea, as
> explained to me by the lead, is that this forces better code.

Someone thought this one out with both feet. A bad comment is worse
than none, but a comment on what something does is always useful.

> the last major, major difference is unit testing. rather than draw a
> design model, program it, add error handling, and send the app to QA,
> we do massive amounts of programmatic unit testing.

That's standard in many industries; an outfit I used to work for did
full-coverage unit testing of anything at SIL level 3 or higher (and
this was in a more or less standard development model). It is a pain
writing those tests, but having them makes future modifications to the
system so much easier! If you can write unit tests that also serve as
regression tests, it can be a major win. Actually, we found that the
process of debugging the tests often turned up issues with the
specification that would have led to serious problems down the line.

> by this i mean, write code to test your code. explicitly and
> specifically testing every possible outcome in a futile attempt to
> write "bulletproof" code. the layers of programmatic development i am
> forced to code are mind boggling. 80% of my time is spent writing test
> cases and code, writing methods to insert & remove data for the tests,
> etc.
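Incidentally, for anyone who has not seen the style: "write code to
test your code" is less mysterious than it sounds. A minimal sketch
using Python's standard unittest module (illustrative only -- your
shop's NUnit does the same job for .NET, and the settle() function here
is invented purely for the example):

```python
import unittest

def settle(principal, rate, days):
    """Simple-interest accrual -- a stand-in for real financial logic."""
    if principal < 0 or days < 0:
        raise ValueError("principal and days must be non-negative")
    return principal * (1 + rate * days / 365.0)

class SettleTests(unittest.TestCase):
    def test_zero_days_returns_principal(self):
        self.assertEqual(settle(1000.0, 0.05, 0), 1000.0)

    def test_one_year_accrues_full_rate(self):
        self.assertAlmostEqual(settle(1000.0, 0.05, 365), 1050.0)

    def test_negative_principal_rejected(self):
        # The "every possible outcome" part: bad inputs must fail loudly.
        with self.assertRaises(ValueError):
            settle(-1.0, 0.05, 30)

if __name__ == "__main__":
    unittest.main(exit=False)
```

The payoff is not catching today's bug, it is that next year's
modification re-runs all of this for free.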
> and this is with the aid of NUnit and NUnitASP, popular tools for
> this process. and it's still this time consuming. 80% of my time
> spent supporting the other 20%. and you know what? it's futile.
> because in the end, it *still* has to go to QA, and they *still*
> find bugs.

But I bet most of those bugs are in the specification of the
interfaces between units, not in the implementation of the units
themselves. Those are usually easier and cheaper to fix, and having
well-defined interfaces (enforced by assertions) tends to make it
obvious where the problem lies.

> -- because when you put complex systems together, weird things
> happen. there's no quantifiable & programmatic way to get around that
> simple truth. humans are flawed, thus any system we design will have
> flaws. EOS.

Of course, but if the individual unit and module tests pass, then you
have limited the scope for those flaws to a very great extent.
Programming is all about making debugging as quick and simple as
possible; I have seen expert programmers spend literally weeks looking
for what turned out to be a misplaced zero. Having a test harness for
the library used in that project (and having the lib crash cleanly
when fed a null pointer) would have cut the search space for the
problem by a few orders of magnitude. Automated testing tools are
good, even when you have to write the bloody things!

Sounds like your management are of the dilbertian pointy-haired
variety and have read a book somewhere (always dangerous in a
manager)....

There is no silver bullet (but XP does have some good points)!

Just my 5p.

Regards, Dan.
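P.S. To make the "crash cleanly" point concrete, here is a sketch of
the kind of boundary assertion I mean (Python, with names invented for
illustration): the contract check fires at the call site, instead of
the bad value wandering three modules downstream before anything
blows up.

```python
def lookup_balance(accounts, account_id):
    # Interface contract, enforced at the boundary: a None table or id
    # fails here, loudly, naming the culprit -- not later, mysteriously.
    assert accounts is not None, "accounts table must not be None"
    assert account_id is not None, "account_id must not be None"
    return accounts.get(account_id)

accounts = {"A-1": 100.0}

# A good call goes through; a bad one trips the assertion immediately.
balance = lookup_balance(accounts, "A-1")
try:
    lookup_balance(None, "A-1")
except AssertionError as err:
    print("caught at the boundary:", err)
```

That assertion is exactly the "cut the search space" device: when it
fires, you know which caller passed garbage, and the weeks-long hunt
for the misplaced zero becomes a five-minute fix.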