[HN Gopher] How fuzz testing was invented (2008)
___________________________________________________________________
How fuzz testing was invented (2008)
Author : fanf2
Score : 44 points
Date : 2024-03-28 09:12 UTC (1 day ago)
(HTM) web link (pages.cs.wisc.edu)
(TXT) w3m dump (pages.cs.wisc.edu)
| 082349872349872 wrote:
| APL has a primitive for generating random sequences and I'm
| pretty sure I've seen toy examples of it being used to
| characterise functions; the story here may have been the origin
| of "fuzz testing" under that name but it would not surprise me at
| all to find 1960s computer use (or 1950s cybernetic use, or even
| 1930s radio use?) under a different name.
| drewcoo wrote:
| I know I was interested in random noise inputs and test oracles
| that could classify them into known equivalence classes for
| inputs/outputs as a way to verify equivalence class boundaries.
| That was in the 90s. But it's an example of similar stuff
| that's not "fuzz testing," per se.
| pfdietz wrote:
| Fuzz testing feels to me like another instance of The Bitter
| Lesson. Use computation instead of manual effort.
| nickpsecurity wrote:
| Well, there are ups and downs. In theory, we should be able to
| automatically generate tests for all of our software. There are
| methods that include path-based, combinatorial, symbolic,
| model-driven, etc. While some options are generic (e.g. range
| tests on primitive types), most tests need a good understanding
| of what is correct vs incorrect behavior.
|
| That brings us to the other problem: software specification. We
| can't even know if software is correct if we don't precisely
| define what correct means. Same with "secure." So, we need
| specifications of each property we want to check. Then, we can
| use a variety of methods, including testing, to check those.
| Developers usually don't specify the correctness conditions.
| The tools for doing so aren't great for developers either.
|
| Enter fuzzing. It can do something similar to path-based and
| combinatorial testing with no user guidance. Many failures
| cause crashes or other obviously bad behavior. That delivers
| value even for software without specifications.
|
| Even if we specify things, we might not get all the conditions
| right or every module covered. Maybe a developer updates code
| without updating the specification. While model-based testing
| benefits from human effort, fuzzing, precisely because it does
| not depend on that effort, will catch what humans missed.
|
| So, there are better methods to test software from both an
| efficiency and accuracy standpoint. Yet, the labor involved
| plus room for human error makes fuzzing a valuable tool in the
| toolbox. There's no shame in using it.
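The "crashes as an oracle" point above can be sketched in a few lines. This is a minimal dumb fuzzer, not from the thread or Miller's papers; the `target` function is a hypothetical stand-in, and the only specification assumed is "don't fail in undocumented ways":

```python
import random

def target(data: bytes) -> None:
    # Hypothetical system under test: a parser that should reject
    # malformed input gracefully rather than crash.
    text = data.decode("utf-8", errors="replace")
    int(text)  # documented failure mode: ValueError on non-numeric input

def fuzz(trials: int = 1000, seed: int = 0) -> list:
    """Throw random byte strings at the target and collect any input
    that triggers something other than the documented failure."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 32)))
        try:
            target(data)
        except ValueError:
            pass  # expected rejection of bad input: not a bug
        except Exception:
            crashes.append(data)  # unexpected failure: worth a look
    return crashes
```

With this toy target, `fuzz()` returns an empty list, because the only failure mode is the documented one; any entry in the returned list would be a bug found with no specification beyond "don't crash".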
| CamperBob2 wrote:
| Isn't the Bitter Lesson more like, "Whatever random thing you
| try will probably outperform every logical approach, if you
| just do it enough?"
|
| More concisely, I suppose it could be expressed as, "Nothing
| really matters except choosing a good fitness function."
| nickpsecurity wrote:
| I thought fuzzing was invented by moths visiting Grace Hopper's
| lab. The moths really put their backs into the work.
| jkaptur wrote:
| I believe it was invented by Margaret Hamilton and a research
| assistant during the Apollo project:
|
| "Often in the evening or at weekends I would bring my young
| daughter, Lauren, into work with me. One day, she was with me
| when I was doing a simulation of a mission to the moon. She
| liked to imitate me - playing astronaut. She started hitting
| keys and all of a sudden, the simulation started. Then she
| pressed other keys and the simulation crashed. She had selected
| a program which was supposed to be run prior to launch - when
| she was already 'on the way' to the moon. The computer had so
| little space, it had wiped the navigation data taking her to
| the moon. I thought: my God - this could inadvertently happen
| in a real mission. I suggested a program change to prevent a
| prelaunch program being selected during flight. But the higher-
| ups at MIT and Nasa said the astronauts were too well trained
| to make such a mistake. Midcourse on the very next mission -
| Apollo 8 - one of the astronauts on board accidentally did
| exactly what Lauren had done. The Lauren bug! It created much
| havoc and required the mission to be reconfigured. After that,
| they let me put the program change in, all right."
|
| https://www.theguardian.com/technology/2019/jul/13/margaret-...
| nickpsecurity wrote:
| It was a joke. I didn't know that about Hamilton, though.
|
| In case her work interests you, I did get their book on
| Higher Order Software where she applied everything they
| learned. The method was like executable specifications with
| code generators. The specs were reminiscent of Prolog and
| CSP mixed together. Later, it became USL in the 001 Toolkit.
| It and her other papers are on htius.com.
|
| I didn't think HOS/USL was practical. Her team's early work
| was really impressive, though. I also still respect everyone
| that tried to achieve the hard goal of formally-specified,
| bug-proof software. Each attempt teaches us lessons.
| r0s wrote:
| Fuzzing is guesswork. I see this attitude: because an unknown
| bug could possibly exist, it could be a high-severity bug, and
| is therefore worth expending a lot of time and effort to
| discover.
|
| But to beat the odds and discover something you can't predict,
| when you don't even know what it could be, some effort should
| first be made to reduce the problem space.
|
| In the given example, I don't see why you need to test input at
| the CLI level when access control and input sanitization should
| already be verified using known parameters that reject all
| unpredictable input. Obviously, certain combinations of input
| are more dangerous than others; at the very least, those
| individual systems should have focused parameterized tests, with
| that set then subtracted from the random fuzz possibilities.
|
| Exploratory testing on highly secure, safety-prioritized systems
| is one thing. Sure, chaotic testing like this has a place, in a
| very specific, hopefully highly structured system. Even then I
| would use it only after every other type of testing.
|
| When someone wants to test every input possibility with random
| noise I roll my eyes. Test what you know is a threat first,
| achieve solid coverage, run tests at every stage of development
| and then maybe we can talk about fuzzing. Is the system actually
| functioning as it's intended? Are all the happy path use cases
| tested? Are you sure about that? Boring, I know.
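The focused, boundary-driven testing advocated above can be sketched as a small parameterized case table. The port validator here is a hypothetical example (not from the thread), and the cases deliberately sit on the equivalence-class boundaries:

```python
# Hypothetical validator under test: accepts port numbers 1..65535.
def valid_port(s: str) -> bool:
    try:
        n = int(s)
    except ValueError:
        return False
    return 1 <= n <= 65535

# Known equivalence classes and their boundaries, chosen by hand
# rather than by random generation:
CASES = [
    ("0", False),      # just below the valid range
    ("1", True),       # lower boundary
    ("65535", True),   # upper boundary
    ("65536", False),  # just above the valid range
    ("", False),       # empty input
    ("80x", False),    # trailing junk
]

def run_cases():
    # Returns (input, passed) pairs for each hand-picked case.
    return [(inp, valid_port(inp) == expected) for inp, expected in CASES]
```

Each case documents why it exists, which is exactly what random noise cannot do; the fuzz corpus can then be pruned of inputs these cases already cover.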
| fanf2 wrote:
| Fuzz testing is incredibly effective at finding gaps in the
| programmer's understanding. You should read Barton Miller's
| papers on fuzz testing https://pages.cs.wisc.edu/~bart/fuzz/ to
| see how effective dumb fuzzing still is over 30 years later.
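Miller's setup can be sketched in a few lines: pipe random bytes into a command's stdin and watch for deaths by signal. This is a simplified illustration, not code from the papers, and the choice of target command is an assumption for the example:

```python
import random
import subprocess

def fuzz_cli(cmd, trials=50, max_len=512, seed=1):
    """Miller-style dumb fuzzing: feed random bytes to a command's
    stdin and flag runs killed by a signal (negative returncode on
    Unix, e.g. SIGSEGV), which indicates a crash."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        proc = subprocess.run(cmd, input=data,
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
        if proc.returncode < 0:
            crashes.append((data, proc.returncode))
    return crashes
```

For example, `fuzz_cli(["cat"])` should return an empty list, since a well-behaved filter survives arbitrary input; Miller's surprising result was how many standard utilities did not.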
___________________________________________________________________
(page generated 2024-03-29 23:00 UTC)