Subj : Re: Not debugging?
To   : comp.programming,comp.lang.java.programmer,comp.lang.lisp
From : Russell Shaw
Date : Thu Aug 25 2005 03:05 pm

Phlip wrote:
> Hartmann Schaffer wrote:
>
>> working with an IDE debugger beats analysing a binary (i.e. hex or
>> octal) core dump
>
> The goal is decreasing the odds you need to do either.
>
> One way to achieve that goal is to learn debugging and add it to your
> tool set.
>
> CBFalconer wrote:
>
>> I can't really remember when I last used a debugger. Judicious
>> printf statements, or the equivalent, have handled everything for
>> me for years.
>
> You think too much. I don't have that problem.
>
> Patricia Shanahan:
>
>> The key issue for me is the round-trip time to make a change in the
>> debug output, recompile, and run to the point of failure. If that
>> takes a few minutes, I have no problem using printouts.
>
> Hence debuggers that recompile their code on the fly as you change it.
>
> These debuggers support designs that are all coupled together. (For
> example: write an ActiveX control and run it from a server-side
> ASP.NET page. You must warm up an entire web server and all its pages
> just to debug this control in situ. MS coupled everything together as
> part of vendor lock-in.)
>
>> On the other hand, I have been faced with problems in unfamiliar
>> programs that took several hours from starting the run to first
>> symptoms of the problem. Once I was at a failure point, I wanted to
>> squeeze every scrap of data I could.
>>
>> An interactive debugger allows you to ask questions you didn't know
>> you wanted to ask until you saw the answer to another question. For
>> example, you can see which variable is incorrect at the failure
>> point, look at the source code to find the variables that affect it,
>> and immediately probe their values.
>
> Legacy situations suck. We achieve debugger-free development typically
> in new code that's fully decoupled and fully designed for testing.
>
> Like I told Russell Shaw, if you can think of the next line of code to
> type, you must perforce be able to think of a test case that will fail
> if that line is not there.
>
> Someone invented these legacy-code situations by debugging down to the
> place where the next lines go, adding the lines, and evaluating them
> on the fly. Or the equivalent, to within the limits of their muscle
> memory. This technique tends to couple everything together.
>
> If you force all code to come through test cases, the test cases must
> be able to access their tested lines without intervening layers of
> cruft. So Design By Testing is a powerful way to decouple all the
> modules.
>
> If they have any unintended features, after characterizing them
> (possibly via debugging), you can very rapidly create new test cases,
> re-using similar test cases, and these can isolate the bug before you
> kill it.
>
> (How many times in legacy code have we given up debugging and just
> poked around, hoping to change something so the bug symptoms go
> away?!)

The tedious stuff I've done would never have been started if I'd tried
to know the exact end result, so that I could work back to a lower
level and write tests for what I wanted. It's more a case of poking in
the dark: having a rough overall idea of how the whole thing should
work, then writing and debugging a rough framework that has more
things added to it through a lot of chopping and changing along the
way. It's more like evolutionary code that grows from a very rough and
hard-to-define shape.
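(For reference, Phlip's rule above - that every next line of code
should be forced into existence by a test that fails without it -
amounts to something like this minimal JUnit-style sketch. The
PriceCalculator class and applyDiscount() method are made up purely
for illustration, not anything from this thread:)

  import junit.framework.TestCase;

  // Hypothetical example: the test is written first and fails until
  // the production line it demands actually exists.
  public class PriceCalculatorTest extends TestCase {
      public void testDiscountIsAppliedToTotal() {
          PriceCalculator calc = new PriceCalculator();
          // Fails until the discount calculation below is written.
          assertEquals(90.0, calc.applyDiscount(100.0, 0.10), 0.001);
      }
  }

  class PriceCalculator {
      double applyDiscount(double total, double rate) {
          return total - total * rate;  // the "next line" the test forces
      }
  }

For exploratory code where the end result isn't known up front, there
often isn't such a test to write yet, which is the problem I'm
describing.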
That said, I also design the overall thing in a way that is modular
and easy to test. Then you can write unit tests for the interfaces
between modules. I'd never write intra-module tests, because for me
that would take 3-5x the resources it took to write the original code.
For something like a GUI program, I make the GUI generate commands
that the backend then interprets, so that the program can also be
script-driven and exercised by automated test scripts.
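(A rough sketch of that command-driven backend idea, assuming a
trivial text command format; the "open"/"save" commands and the
Backend and ScriptDriver names are invented for illustration. The
point is that the GUI's event handlers and a test script both feed the
same execute() entry point:)

  import java.util.HashMap;
  import java.util.Map;

  // The GUI never calls backend logic directly; button handlers emit
  // text commands, and one interpreter executes them.
  interface Command {
      void run(String arg);
  }

  class Backend {
      private final Map<String, Command> commands =
          new HashMap<String, Command>();

      Backend() {
          commands.put("open", new Command() {
              public void run(String arg) { System.out.println("opening " + arg); }
          });
          commands.put("save", new Command() {
              public void run(String arg) { System.out.println("saving " + arg); }
          });
      }

      // Single entry point used by both the GUI and test scripts.
      void execute(String line) {
          String[] parts = line.split(" ", 2);
          Command c = commands.get(parts[0]);
          if (c == null)
              throw new IllegalArgumentException("unknown command: " + line);
          c.run(parts.length > 1 ? parts[1] : "");
      }
  }

  public class ScriptDriver {
      public static void main(String[] args) {
          Backend backend = new Backend();
          // An automated test script is just command lines replayed in
          // order; the GUI would feed identical strings from its
          // event handlers.
          String[] script = { "open report.txt", "save report.txt" };
          for (String line : script) backend.execute(line);
      }
  }

With that single entry point, an automated test is just a file of
command lines replayed against the backend, no GUI needed.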