Subj: Re: Not debugging?
To:   comp.programming,comp.lang.java.programmer,comp.lang.lisp
From: Greg Menke
Date: Sun Aug 28 2005 09:30 am

"Phlip" writes:

> Greg Menke wrote:
>
> > Nope- went through breadboard designs of the hardware using inputs &
> > outputs first from emulation hardware and eventually from the real
> > hardware.  The breadboard designs are used to work up hardware and
> > software designs and implementations, feeding back changes & bugfixes,
> > etc into the evolving specs.  Ideally, the working software meets the
> > working hardware later on in the project.
>
> Sorry to interrupt; why is that "ideal"?  Does the software match up even
> later sometimes?

I'm not proposing that it is.  But there's not much alternative to
something like that when both the hardware and software are being
developed.

> > Low fidelity emulation is pretty handy for easier classes of bugs
> > early on; mismatched or unpacked structs, endian stuff, etc..  But
> > race conditions and pathological i/o states (the harder problems)
> > often depend on the real hardware in the real operating environment
> > (maybe interrupt timing changes because capacitors & clocks drift
> > when the system goes below freezing, for example).
>
> As you stated before, in that space you often must research the nature of
> faults, and that research will occupy significant development time.

And for these hard problems, you have to test with the real thing,
because any emulator is going to miss some (many) of these subtle
issues.  For the many easy bugs, an emulator (or maybe simulator) makes
a good deal of sense, but it's no panacea.

> >> Ideally, I would configure one button on my editor to run all tests
> >> against the emulator, then run all possible tests against the
> >> hardware.  If the code fails in the hardware I would trivially
> >> attempt to upgrade the emulator to match the fault, so the code also
> >> fails with the emulator.
> >> But this is speculation, and of course it won't prevent the need to
> >> debug against the hardware.
>
> > Neat.  Who's going to pay for the emulator (build it & prove that it
> > matches the hardware- and keep it matching) when we also have to do
> > the real system?
>
> In my experience, many teams are too busy doing "foo" because they did
> not carefully plan a development environment that reduces the odds of
> "foo".  I suspect your team does have a good development environment,
> so we may be talking past each other now.

How do you reduce the chances of bugs by writing an emulator that's at
least as complicated as the real system, and that is essentially
guaranteed to have different & unknown bugs which will limit its
fidelity at some ill-defined point?  Do you really think that teams
working on systems like this don't spend a lot of time trying to
minimize the opportunities for bugs to show up?  There is lots of
code-reuse, walkthroughs, reviews, etc... on both the hardware and
software- emulators and simulators where feasible; for example, OPNET
may be used to evaluate task and network schedulability in all sorts of
circumstances before the software is even started.

As we speak, I'm helping deal with a crash-on-reset problem on the
production hardware that doesn't happen on the development hardware,
plus weird EDAC behavior to go along with it.  Analyzing this problem
requires a top-to-bottom analysis of the system, with all sorts of
inter-team documentation of what's going on.  How is an emulator going
to help here?  At best it will exhibit the same problems (but we still
don't know what the problems are); at worst it will not duplicate the
behavior.  In the former case it hasn't gotten us any closer; in the
latter it's worse, because project management is now nervous that the
emulator is presenting yet another difference from the real system,
when they're (supposed to be) relying on it to help them test.
Now if you're talking about a high-level app that can afford to ignore
hardware issues, then a low-fidelity emulator doesn't have to address
the hard problems, and so offers greater coverage.

Gregm