[HN Gopher] My First Production Bug
       ___________________________________________________________________
        
       My First Production Bug
        
       Author : niborgen
       Score  : 32 points
        Date   : 2024-06-11 08:22 UTC (1 day ago)
        
 (HTM) web link (robinverschueren.com)
 (TXT) w3m dump (robinverschueren.com)
        
       | inglor_cz wrote:
       | "Avoid #defines for constants like the plague. Use static
       | constexpr instead."
       | 
       | Pretty much every C textbook I can think of now is full of
       | #defines for constants. I, too, am (de)formed by this.
        
         | niborgen wrote:
         | Thanks for your comment.
         | 
         | The blog post was written with C++ in mind, but I just learned
         | that C now also has constexpr:
         | https://en.cppreference.com/w/c/language/constexpr.
         | 
         | Time to update those books (:
        
         | sltkr wrote:
         | Because `constexpr` was not supported in C until very recently
         | (C23).
         | 
          | And `static const` by itself is not a good alternative in C,
          | because such constants can't be used where a constant
          | expression is required, for example when declaring the size
          | of an array.
        
           | nj5rq wrote:
           | Constants can be used for array sizes since a lot of C
           | standards ago, right?
        
       | egl2021 wrote:
       | I'm working on a code base with lots of #define. I replace them
       | when I'm doing maintenance nearby, but you can't change them
       | blindly: some bit of code may be inadvertently depending on the
       | 0-artifact or something similar. "Science progresses one funeral
       | at a time."
        
         | niborgen wrote:
         | Totally. Luckily, these #defines are often easy to grep for.
        
       | sltkr wrote:
       | Although using constexpr over #define wherever possible is good
       | advice (and similarly with functions over macros), you can also
        | argue that the culprit is really this snippet:
        | 
        |     float ts = 1 / FRAMERATE;
       | 
       | This blindly assumes that FRAMERATE is a floating-point constant,
       | but there is no reason to assume it is; in theory, other parts of
       | the code could depend on FRAMERATE being integral. The code
       | should be written in a way that ensures conversion to float
        | happens before division, for example:
        | 
        |     float ts = 1.0f / FRAMERATE;
        
         | tialaramex wrote:
          | In a better language this mistake can't happen anyway. One of
          | the things that too often gets missed when explaining why C++
          | is unsafe is that it's full of these unnecessary footguns;
          | there is no need to be able to get this wrong.
         | 
         | C++ gets this wrong for maximum drop-in compatibility with C,
         | the classic "New Jersey Style" programming language where
         | simplicity of implementation is prized over simplicity of use
         | or correctness.
        
           | prerok wrote:
           | Define "better language". The duck typed languages all suffer
           | the same problem.
           | 
            | Even in Go we had a stupid problem where the default JSON
            | deserializer creates floats (when deserializing into any),
            | and the number was a large enough int64 that it lost
            | precision.
           | 
           | I mean, we can go at it all night long what pitfalls await in
           | what language. Perhaps Rust is safest with its own pitfalls
           | where you just can't do it safely (looking at you BST and use
           | of Arc).
           | 
           | Programming is full of such traps and only inexperienced
           | engineers in a language would make such a mistake. This
           | includes engineers with 20+ years of 1 year experience.
        
             | jshier wrote:
             | In this case, a language that doesn't support automatic
             | numeric type conversions. For instance, in Swift, 1 /
             | FRAMERATE would give you integer division if FRAMERATE was
             | an Int, or Double/Float division if FRAMERATE was a float,
             | as the 1 literal would be inferred to a compatible type for
             | / if one existed. You would never see Int / Float, or an
             | implicit conversion between numeric types.
        
               | niborgen wrote:
               | Never worked with Swift before, but this checks out:
               | https://swiftfiddle.com/mt5uptmbynbt7bi4jot7m5laum
               | 
                | Careful though: if you don't put `: Float` on ts, it
                | still gives 0.
        
               | prerok wrote:
                | I don't understand the point of both replies... my
                | point was that integer based division is preferred in
                | all languages, and floating-point division must somehow
                | be "forced" by making one operand a float.
               | 
               | The post I was replying to mentions that this sort of
               | problem does not exist in "better languages" but my point
               | was that it does.
               | 
                | The problem here is that the code assigns an integer
                | (the result of the division) to a float, an implicit
                | "upgrade" in languages like C and C++ that is required
                | to be explicit in newer languages like Golang and Rust.
               | 
               | My point, which I seem to have failed to make, is that
               | careless programmers would (and do!) make a silly thing
               | like:
               | 
               | v := float64(operand1 / operand2)
               | 
               | just to satisfy the compiler error.
               | 
               | A common mistake for junior programmers but unforgivable
               | (for some interpretations of unforgivable :) ) one for a
               | senior.
        
               | tialaramex wrote:
               | > integer based division is preferred in all languages
               | 
               | What does this even mean?
               | 
               | First of all, several languages have distinct "integer
               | divison" and "floating point division" operators, so
               | there's no sense in which integer is "preferred" in those
               | languages, they're unrelated operations.
               | 
                | Even allowing for your ignorance of such languages,
                | _many_ modern languages do not have untyped constants;
                | they're an attractive nuisance. If you don't have
                | untyped constants, then even if you're relying on
                | implicit typing for constants (which I also don't
                | like), you still trip over the mistake in the original
                | expression, because the types no longer match.
               | 
               | This mistake only occurs in a language with all of:
               | 
               | 1. A single division operator despite two distinct
               | operations
               | 
               | 2. Untyped constants
               | 
               | 3. "Promotion" so that type mismatches just do something
               | unexpected and compile anyway.
        
               | jshier wrote:
               | Or just use 1.0, which can only be a Float literal.
        
           | prewett wrote:
           | But the "better" languages have their own problems. UI
           | calculations tend to need to mix int/float frequently, and
           | the calculations can get fairly unreadable from all the
           | casting (and I'm the sort that religiously casts to floats in
            | C++ because I get burned on this). Swift was particularly
            | bad about this (Swift 5; I can't remember the
           | details). Then there are things like `total / n` when
           | calculating an average. Obviously you want `n` to be an int,
           | because incrementing a float is a bad idea, but now you need
           | to cast `n`, even though `total` is already a float, because
           | they don't match.
           | 
            | I tend to prefer the casting, because I hate spending a
            | bunch of time debugging 0s and NaNs, but sometimes it
            | makes things look unnecessarily ugly and hard to read.
        
           | tuveson wrote:
           | I agree that implicitly promoting integers to floats (or
           | implicitly changing any type) is a bad idea, but I don't
            | think C++ is uniquely bad in this regard. Java[1] and
           | C#[2] both do this too.
           | 
           | [1] https://godbolt.org/z/vE9n14a54 [2]
           | https://godbolt.org/z/aPa3P1Grc
        
             | tialaramex wrote:
             | Right, certainly I don't want to single C++ out for this -
             | I mentioned it because (a) the article we're talking about
             | is describing a mistake the author found in some C++ code,
             | and (b) I wanted to head off the inevitable call to blame
             | C, yes, C++ does it because C does it (and that's probably
              | why Java and C# do it too) but that's a _choice_ and it's
             | a bad choice.
             | 
             | Implicit conversion is always a mistake. The price in terms
             | of reduced clarity and extra mistakes is too high for the
             | marginal convenience of less typing.
             | 
             | I feel the same for boolean coercion even, which I know is
             | more controversial than some of the really stupid C
             | promotions, I do not believe in "truthiness". There's only
             | one false, it's the constant false, it's not 0 or "" or 0.0
             | or an empty array or a null pointer or a billion other
             | things, it's just itself and nothing else.
        
               | tuveson wrote:
               | Yeah I totally agree, it's a bad design choice and the
               | successors to C should not have copied it.
        
           | nj5rq wrote:
           | > why C++ is unsafe is that it's full of these unnecessary
           | footguns
           | 
           | I don't see the "footgun". C and C++ allow you to perform
           | divisions by integers. If you don't specify the decimals, it
           | understands that they are integers. It's how it's meant to
           | be. In my opinion, calling that a "footgun" is like saying
           | using single quotes for characters is a "footgun" because
           | someone could interpret them as strings. That's just not
           | understanding the language.
        
             | daymanstep wrote:
             | "just read the docs" is a fully general counter argument
             | that can be used to justify arbitrary bad design decisions
             | 
             | I think Python 3 did the right thing by having 1/3 equal
             | 0.333 (a float) rather than 0. It's more intuitive for the
             | / operator to always do standard division and when you want
             | integer division then you use the // operator instead. It's
             | more consistent than having / return a completely different
              | result depending on whether one of the operands happens
              | to
             | be 3 instead of 3.0
        
         | niborgen wrote:
         | Author here, totally agree with you. Changing the preprocessor
         | directive was just the way I fixed it way back when.
         | 
         | Of course, the real fix is to use a proper constant.
        
       | nj5rq wrote:
        | The last code block has a typo:
        | 
        |     #define FRAMERATE = 100.0
        | 
        | Should probably be:
        | 
        |     #define FRAMERATE 100.0
        
         | niborgen wrote:
         | Right, sorry! Corrected now.
        
       ___________________________________________________________________
       (page generated 2024-06-12 23:02 UTC)