[HN Gopher] Why is Git Autocorrect too fast for Formula One driv...
       ___________________________________________________________________
        
       Why is Git Autocorrect too fast for Formula One drivers?
        
       Author : birdculture
       Score  : 412 points
       Date   : 2025-01-19 19:20 UTC (1 day ago)
        
 (HTM) web link (blog.gitbutler.com)
 (TXT) w3m dump (blog.gitbutler.com)
        
       | 1970-01-01 wrote:
       | Deciseconds?? There's your problem. Always work in seconds when
       | forcing a function for your users.
        
         | UndefinedRef wrote:
         | Maybe he meant dekaseconds? Still weird though..
        
           | TonyTrapp wrote:
            | It reads like the intention was that when the boolean (0/1)
            | parameter was turned into an integer parameter, the previous
            | value (enabled = 1) should behave reasonably close to the
            | old behaviour, and 1 decisecond is arguably close enough to
            | instant. If the parameter were measured in seconds, the
            | command would always have to wait a whole second before
            | executing, with no room for smaller delays.
        
             | bot403 wrote:
             | No, smaller delays <1s are also a misdesign here. Have we
             | all forgotten we're reacting to typos? It's an error
             | condition. It's ok that the user feels it and is
             | inconvenienced. They did something wrong.
             | 
             | Do some think that 900ms, or 800, or some other sub-second
             | value is really what we need for this error condition?
             | Instead of, you know, not creating errors?
        
           | schacon wrote:
           | We had this debate internally at GitButler. Deci versus deca
           | (and now deka, which appears to also be a legit spelling). My
           | assumption was that 1 full second may have felt too long, but
           | who really knows.
        
             | a3w wrote:
             | deci is 1/10, deca is 10/1. So decisecond is correct.
        
               | schacon wrote:
                | I understand; I meant that I tried to say the word
                | "decisecond" out loud and we debated whether that was a
                | real word or whether I was attempting to say "deca",
                | which would have been understandable.
        
               | zxvkhkxvdvbdxz wrote:
                | It's very standardized (SI), meaning 1/10th. Although
                | it's not so commonly used with seconds.
               | 
               | You might be more familiar with decimeters, deciliters,
               | decibels or the base-10 (decimal) numbering system.
        
               | quesera wrote:
               | Also "decimate" which used to mean "kill 1/10th of the
               | soldiers", but now apparently means "destroy (almost)
               | entirely". :)
        
               | CRConrad wrote:
               | Sure, deca- as in "decade" is understandable. But why
               | would deci- as in "decimal" be any less understandable?
        
         | 331c8c71 wrote:
          | Seconds or milliseconds (e.g. if the setting must be an
          | integer) would've been fine, as they are widely used.
          | Deciseconds, centiseconds - wtf?
        
           | atonse wrote:
          | Falls squarely within "they were too busy figuring out whether
          | they could do it to ask whether they SHOULD do it".
        
         | dusted wrote:
          | at least use fractions of a second; 250 ms would already be
          | much more noticeable. 100 ms is a nice compromise between
          | "can't react" and "have to wait", assuming you're already
          | realizing you probably messed up
        
         | gruez wrote:
         | better yet, encode the units into the variable/config name so
         | people don't have to guess. You wouldn't believe how often I
         | have to guess whether "10" means 10 seconds (sleep(3) in linux)
         | or milliseconds (Sleep in win32).
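          | 
          | To make it concrete, the same bare number means three
          | different things depending only on which API you call (units
          | per each API's own docs):
          | 
          |     sleep 10    # coreutils sleep(1): 10 seconds
          |     # C:     sleep(10);   /* POSIX sleep(3): seconds */
          |     # Win32: Sleep(10);   /* milliseconds */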
        
         | synecdoche wrote:
         | This may be something specific to Japan, which is where the
         | maintainer is from. In the Japanese industrial control systems
         | that I've encountered time is typically measured in this unit
         | (100 ms).
        
         | GuB-42 wrote:
          | Deciseconds (100 ms) are not a bad unit when dealing with UI
          | _because_ that is about the fastest human reaction time. We
          | can't really feel the difference between 50 ms and 150 ms
          | (both feel instant), but we can definitely feel the difference
          | between 500 ms and 1500 ms. Centiseconds are too precise,
          | seconds are not precise enough. It is also possible that the
          | computer is not precise enough for centiseconds or less,
          | making the extra precision a lie.
         | 
         | Deciseconds are just uncommon. But the problem here is that the
         | user didn't expect the "1" to be a unit of time but instead a
         | boolean value. He never wanted a timer in the first place.
         | 
          | By the way, not making the unit of time clear is a pet peeve
          | of mine. The unit is _never_ obvious: seconds and milliseconds
          | are the most common, but you don't know which one unless you
          | read the docs, and it can be something else entirely.
         | 
          | My preferred way is to specify the unit in the value itself
          | (e.g. "timeout=1s") with a specific type for durations; second
          | best is to have it in the name (e.g. "timeoutMs=1000");
          | documentation comes third (that's the case with git). If the
          | unit isn't documented in any way, you usually have to resort
          | to trial and error or dig deep into the code, as these values
          | tend to be passed around quite a bit before reaching a
          | function that finally makes the unit of time explicit.
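          | 
          | As a rough sketch of the first style (a hypothetical to_ms
          | helper, not anything git actually provides), rejecting bare
          | numbers so the unit is always explicit:
          | 
          |     to_ms() {
          |         # "500ms" / "2s" -> milliseconds; integers only
          |         case "$1" in
          |             *ms) echo "${1%ms}" ;;
          |             *s)  echo "$(( ${1%s} * 1000 ))" ;;
          |             *)   echo "needs a unit: 500ms, 2s" >&2; return 1 ;;
          |         esac
          |     }
          | 
          |     to_ms 2s      # -> 2000
          |     to_ms 500ms   # -> 500
          |     to_ms 7       # -> error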
        
         | userbinator wrote:
         | My default for times is milliseconds, since that's a common
         | granularity of system timing functions.
        
       | mike-the-mikado wrote:
       | I'd be interested to know if any F1 drivers actually use git.
        
         | schacon wrote:
         | Not sure, but I do personally know two high profile Ruby
         | developers who regularly race in the LMP2 (Le Mans Prototype 2)
         | class - DHH and my fellow GitHub cofounder PJ Hyett, who is now
         | a professional driver, owning and racing for AO
         | (https://aoracing.com/).
         | 
          | I mostly say this because I find it somewhat fun that they
          | raced _each other_ at Le Mans last year, but also because I've
         | personally seen both of them type Git commands, so I know it's
         | true.
        
           | diggan wrote:
            | Isn't Le Mans more of an "endurance" race though, especially
            | compared to F1? It would be interesting to see the difference
            | in reaction ability between racers from the two; I could see
            | it being different.
        
             | schacon wrote:
             | I feel like in the "racing / git crossover" world, that's
             | pretty close. :)
        
           | pacaro wrote:
           | I've also worked with engineers who have raced LMP. It's
           | largely pay-to-play and this is one of those professions
           | where if you're the right person, in the right place, at the
           | right time, you might be able to afford it.
        
           | xeonmc wrote:
           | Maybe we can pitch to Max Verstappen to use Git to store his
           | sim racing setup configs.
        
       | kittikitti wrote:
       | I sometimes have this realization as I'm pressing enter and
       | reflexively press ctrl+c. As someone whose typing speeds range
       | from 100 to 160 WPM, this makes sense. Pressing keys is much
       | different from Formula One pit stops.
        
         | schacon wrote:
         | I'm curious if the startup time, plus the overhead of Git
         | trying to figure out what you might have meant is significant
         | enough to give you enough time to realize and hit ctrl+c. In
         | testing it quickly, it looks like typing the wrong command and
         | having it spit out the possible matches without running it
         | takes 0.01-0.03s, so I would venture to guess that it's still
         | not enough time between hitting enter and then immediately
         | hitting ctrl-c, but maybe you're very very fast?
        
           | rad_gruchalski wrote:
           | The command is already running, you ctrl+c THE command. But I
           | agree, 100ms is short.
        
           | johnisgood wrote:
           | I think most programs you execute have enough startup
            | overhead to do Ctrl-C before they even begin, including CLI
            | tools. I do this a lot (factoring in the time it takes to
            | realize it was the wrong command, or not the flags I
            | wanted, etc.)
        
         | snet0 wrote:
         | That reflexivity felt a bit weird the first time I thought
         | about it. I type the incorrect character, but reflexively
         | notice and backspace it without even becoming aware of it until
         | a moment later. I thought it'd be related to seeing an
         | unexpected character appearing on the display, but I do it just
         | as quickly and reflexively with my eyes closed.
         | 
         | That being said, there are obviously cases where you mistype
         | (usually a fat-finger or something, where you don't physically
         | recognise that you've pressed multiple keys) and don't
         | appreciate it until you visually notice it or the application
         | doesn't do what you expected. 100ms to react to an unexpected
         | stimulus like that is obviously not useful.
        
           | grogenaut wrote:
           | I type a lot while looking away from the monitors, helps me
           | think / avoid the stimulus of the text on the screen. I can
           | tell when I fat finger. It also pissed off the boomers at the
           | bar who thought I was showing off as I was a) typing faster
           | then they could, and b) not looking at the screen, c)
           | sometimes looking blankly past them (I'm really not looking
           | when I do this sometimes).
           | 
           | also I typed this entire thing that way without looking at it
           | other than for red squiggles.
        
         | otherme123 wrote:
          | Not about pit stops. They talk about pro drivers with highly
          | trained reflexes, looking at a red light and knowing that it
          | will turn green in the next 3 seconds, so they must push the
          | pedal to the metal as fast as they can. Reacting in less than
          | 120 ms is considered a jump start.
          | 
          | As for 100 WPM, which is a very respectable typing speed, it
          | translates to 500 CPM, less than 10 characters per second, and
          | thus slightly above 100 ms per keypress. But Ctrl+C is two key
          | presses: reacting and typing them both in under 100 ms is
          | equivalent to a writing speed above 200 WPM.
         | 
         | Even the fastest pro-gamers struggle to go faster than 500
         | actions (keypresses) per minute (and they use tweaks on repeat
         | rates to get there), still more than 100ms for two key presses.
        
           | Aerroon wrote:
           | > _But Ctrl+C are two key presses: reacting to type them both
           | in under 100 ms is equivalent to a writting speed above
           | 200WPM._
           | 
           | I think people don't really type/press buttons at a constant
           | speed. Instead we do combos. You do a quick one-two punch
           | because that's what you're used to ("you've practiced"). You
           | do it much faster than that 100ms, but after that you get a
           | bit of a delay before you start the next combo.
        
             | otherme123 wrote:
              | As mentioned, pro gamers train combos for hours daily. The
              | best of them can press _up to_ 10 keys per second without
              | thinking. For example, the fastest StarCraft II player,
              | Reynor (Riccardo Romitti), can sustain 500 key presses per
              | minute and do short bursts of 800. He has videos explaining
              | how to tweak the Windows registry to achieve such a rate
              | (it involves pressing some keys once while the OS
              | autorepeats faster than you can press), because it can't
              | be done with the standard config dialogs. And you are
              | trying to tell me that you can do _double that_... not
              | only double that, "much faster" than that.
              | 
              | I dare anyone to make a script that, after launching, asks
              | you to press Ctrl+C after a random wait between 1000 and
              | 3000 ms, and records your reaction time measured after key
              | release. It's allowed to "cheat" and have your fingers
              | ready over the two keys. Unless you jump start and get
              | lucky, you won't get better than 150 ms.
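              | 
              | A rough sketch of that dare in bash (assumes GNU date for
              | millisecond %3N timestamps; use gdate on macOS - and note
              | the terminal fires SIGINT on key press, not release):
              | 
              |     #!/usr/bin/env bash
              |     # Wait a random 1-3 s, print GO, time the Ctrl+C.
              |     on_int() {
              |         [ -z "$start" ] && { echo "jump start!"; exit 1; }
              |         end=$(date +%s%3N)
              |         echo "reaction: $((end - start)) ms"
              |         exit 0
              |     }
              |     trap on_int INT
              | 
              |     echo "Press Ctrl+C when you see GO..."
              |     delay=$(awk 'BEGIN{srand(); printf "%.3f", 1+rand()*2}')
              |     sleep "$delay"
              | 
              |     echo "GO"
              |     start=$(date +%s%3N)
              |     while :; do sleep 0.01; done  # wait for the SIGINT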
        
               | adzm wrote:
               | I actually took you up on this, and best I was able to
               | get was about 250ms when I was really concentrating.
               | Average was around 320!
        
               | leeoniya wrote:
               | https://humanbenchmark.com/tests/reactiontime
               | 
               | for single mouse click, 225ms is pretty typical for me
               | after a bit of warmup. sub 200 is not consistently
                | reproducible. i don't think i've ever cracked < ~185ms
        
               | Aerroon wrote:
               | You don't make a typo, press enter and then start
               | reacting to the typo.
               | 
               | You start reacting to the typo as you're typing. You just
               | won't get to the end of your reaction before you've
               | pressed enter.
               | 
               | The point of my combo comment is that pressing Ctrl + C
               | is not the same thing as typing two random letters of a
               | random word.
               | 
               | Combine these two things and I think it's possible for
               | somebody to interrupt a command going out. The question
               | is whether you can press Ctrl+C while typing faster than
               | 100ms, not whether you can react to it within 100ms.
               | 
               | Also, people regularly type faster than the speed that
               | pro StarCraft players play at. The sc2 players need the
               | regedit because they will need to press Z 50 times in a
               | row to make 100 zerglings as fast as possible, but you
               | don't need that to type.
        
           | mjpa86 wrote:
           | There is no green light at the start - it's the lights going
           | out they react to. There's also no minimum time, you can get
           | moving after 1ms - it's legal. In fact, you can move before
           | the lights go out, there's a tolerance before you're classed
           | as moving.
        
       | politelemon wrote:
       | I agree that 'prompt' should be the value to set if you want git
        | autocorrect to work for you. I'd however want Y to be the
        | default rather than N, so that a user can just press enter once
        | they've confirmed the suggestion.
        | 
        | In any case, it is not a good idea to have a CLI command run
        | without your approval, even if the intention was really obvious.
        
         | misnome wrote:
         | Yes, absolutely this. If I don't want it to run, I will hit
         | ctrl-c.
        
         | junon wrote:
         | If prompt is the default, mistyped scripts will hang rather
         | than exit 1 if they have stdin open. I think that causes more
         | problems than it solves.
        
           | jzwinck wrote:
           | That's what isatty() is for. If stdin is not a TTY, prompting
           | should not be the default. Many programs change their
           | defaults or their entire behavior based on isatty().
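            | 
            | In shell terms, test(1)'s -t flag is the same check (a
            | sketch of the pattern, not git's actual code):
            | 
            |     if [ -t 0 ]; then
            |         # stdin is a terminal: safe to ask a human
            |         read -r -p "Run the suggestion? [y/N] " ans
            |         [ "$ans" = "y" ] || exit 1
            |     else
            |         # stdin is a pipe/file: fail fast, don't hang
            |         echo "unknown command" >&2
            |         exit 1
            |     fi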
        
             | junon wrote:
             | isatty() is spoofed in e.g. Make via PTYs. It's a light
             | check at best and lies to you at worst.
        
               | darthwalsh wrote:
               | If make is going to spoof the PTY, it should take
               | responsibility for answering the autocorrect prompt
        
               | junon wrote:
               | There's no "prompt". That's not how TTYs work. Make has
               | no idea the program is waiting for any input.
        
       | mscdex wrote:
       | This seems a bit strange to me considering the default behavior
       | is to only show a suggested command if possible and do nothing
       | else. That means they explicitly opted into the autocorrect
        | feature, didn't bother to read the manual first, and just
       | guessed at how it's supposed to be used.
       | 
       | Even the original documentation for the feature back when it was
       | introduced in 2008 (v1.6.1-rc1) is pretty clear what the
       | supported values are and how they are interpreted.
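        | 
        | For reference, the values the current docs describe (the
        | symbolic names came in later releases; check `git help config`
        | for your version):
        | 
        |     git config help.autocorrect 0          # default: only suggest
        |     git config help.autocorrect 20         # run after 20 ds (2s)
        |     git config help.autocorrect immediate  # run right away
        |     git config help.autocorrect prompt     # ask before running
        |     git config help.autocorrect never      # no suggestions at all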
        
       | dusted wrote:
        | I think it makes sense: if I typed something wrong, I often feel
        | it before I can read it, and if I already pushed enter, being
        | able to ctrl+c within 100 ms is enough to save me. I'm pretty
        | sure I've also aborted git pushes before they touched anything
        | even before I turned this on, but this makes it more reliable.
        
         | SOLAR_FIELDS wrote:
         | 100 ms is an insanely short window. I would say usually even
         | 1000ms would be too short for me to recognize and kill the
         | command, even if I realized immediately that I had done
         | something wrong.
        
           | jsjshsbd wrote:
            | It's much too short to read the output, interpret it, and
            | realize you have to interrupt.
            | 
            | But often you type something and realize it's wrong while
            | you are typing, but not fast enough to stop your hand from
            | pressing [Enter].
            | 
            | That is one of the only situations where 100ms would be
            | enough to save you.
            | 
            | That being said, the reason given in the article for 100ms
            | is just a confused committer. Why would anyone:
            | 
            | 1) encode a Boolean value as 0/1 in a human-readable
            | configuration
            | 
            | 2) encode a duration as a numeric value without a unit in a
            | human-readable configuration
            | 
            | Both are just lazy
        
             | Reason077 wrote:
             | > _" Why would anyone ... encode a Boolean value as 0/1 in
             | a human readable configuration"_
             | 
             | It may be lazy, but it's very common!
        
             | grayhatter wrote:
             | laziness is a virtue of a good programmer.
             | 
             | why demand many char when few char do trick?
             | 
             | also
             | 
              | > Why would anyone [...] encode a duration as a numeric
              | value without a unit in a human-readable configuration
             | 
             | If I'm only implementing support for a single unit, why
             | would you expect or want to provide a unit? What's the
             | behavior when you provide a unit instead of a number?
             | 
             | > but not doing that extra work is lazy
             | 
              | no, because the time I'm not spending on unit parsing for
              | a feature I wouldn't use is time I'm spending on a better,
              | faster diff algorithm, or a new protocol with better
              | security, or sleeping. It's not lazy to do something
              | important instead of something irrelevant. And given we're
              | talking about git, which is already very impressive
              | software, provided for free by volunteers, I'm going to
              | default to assuming they're not just lazy.
        
             | SoftTalker wrote:
             | Absolutely. When I'm booting up an unfamiliar system and
             | trying to catch the BIOS prompt for something non-normal,
             | even 5 seconds is often too short. For me to notice that
             | the prompt has been given, read "PRESS DEL KEY TO ENTER
             | SETUP, F11 FOR BOOT OPTIONS, F12 FOR PXE BOOT" (or
             | whatever), understand it, look for the F11 key on the
              | possibly unfamiliar keyboard on my crash cart, and press it,
             | can often take me more than 5 seconds. Especially if it's
             | not a single key required but a control sequence. Maybe I'm
             | slow. I always change these prompts to 10 seconds if they
             | are configurable. Or I'll make a label with the options and
             | stick it on the case so I can be prepared in advance.
        
         | Etheryte wrote:
         | Maybe worth noting here that 100ms is well under the human
         | reaction time. For context, professional sprinters have been
         | measured to have a reaction time in the ballpark of 160ms, for
         | pretty much everyone else it's higher. And this is only for the
         | reaction, you still need to move your hand, press the keys,
         | etc.
        
           | shawabawa3 wrote:
           | In this case the reaction starts before you hit enter, as
           | you're typing the command
           | 
           | So, you type `git pshu<enter>` and realise you made a typo
           | before you've finished typing. You can't react fast enough to
           | stop hitting enter but you can absolutely ctrl+c before 100
           | more ms are up
        
             | brazzy wrote:
             | > you can absolutely ctrl+c before 100 more ms are up
             | 
             | Not gonna believe that without empirical evidence.
        
               | burnished wrote:
               | I think they are talking about times where you realize a
               | mistake as you are making it as opposed to hindsight,
               | given that 100ms seems pretty reasonable.
        
               | brazzy wrote:
               | "seems pretty reasonable" is not evidence.
        
               | dusted wrote:
                | This is exactly what I'm trying to say. The actions are
                | already underway in the muscles (or _just_ completed)
                | when the brain catches that something's off, and so the
                | ctrl+c is queued.
        
               | bmacho wrote:
                | I am not sure, have you read it properly? The scenario is
                | that you are pushing enter, halfway through you change
                | your mind, and you are switching to ctrl+c. So it is not
                | a reaction time, but an enter to ctrl+c scenario.
               | 
               | Regarding reaction time, below 120ms (on a computer, in a
               | browser(!)) is consistently achievable, e.g. this random
               | yt video https://youtu.be/EH0Kh7WQM7w?t=45 .
               | 
               | For some reason, I can't find more official reaction time
               | measurements (by scientists, on world champion athletes,
               | e-athletes), which is surprising.
        
               | brazzy wrote:
                | That scenario seems fishy to me to begin with. Is that
                | something that actually happens, or just something people
                | imagine? How would it work that you "change your mind
                | halfway through" and somehow cannot stop your finger from
                | pressing enter, but _can_ move your fingers over and hit
                | ctrl-c in a ridiculously short time window?
                | 
                | > So it is not a reaction time, but an enter to ctrl+c
                | scenario.
                | 
                | At minimum, if we ignore the whole "changing your mind"
                | thing. And for comparison: the _world record_ for typing
                | speed (over 15 seconds, and _without using any modifier
                | keys_) is around 300wpm, which translates to one keypress
                | every 40ms - you really think 100ms to press two keys is
                | something "you can absolutely" do? I'd believe that
                | _some_ people could _sometimes_ do it, but certainly not
                | just anyone.
        
               | dusted wrote:
               | That'd be interesting, but I don't know how to prove that
               | I'm not just "pretending" to make typos and correcting
                | them instantly?
        
             | Etheryte wrote:
             | I'm still pretty skeptical of this claim. If you type 60
             | wpm, which is faster than an average human, but regular for
             | people who type as professionals, you spend on average
             | 200ms on a keystroke. 60 standard words per minute means
             | 300 chars per minute [0], so 5 chars per second which is
             | 200ms per char. Many people type faster than this, yes, but
             | it's all still very much pushing it just to even meet the
             | 100ms limit, and that's without any reaction or anything on
             | top.
             | 
             | [0] https://en.wikipedia.org/wiki/Words_per_minute
        
               | pc86 wrote:
               | Even if you typed 120 wpm, which is "competitive typing"
               | speed according to this thing[0], it's going to take you
               | 200ms to type ctrl+c, and even if you hit both more-or-
               | less simultaneously you're going to be above the 100ms
               | threshold. So to realistically be able to do something
               | like beat the threshold during normal work and not a
               | speed-centered environment you're probably looking at
               | regularly 160 wpm or more?
               | 
               | I'm not a competitive speed typist or anything but I
               | struggle to get above 110 on a standard keyboard and I
               | don't think I've _ever_ seen anyone above the 125-130
               | range.
               | 
               | [0] https://www.typingpal.com/en/documentation/school-
               | edition/pe...
        
               | grayhatter wrote:
                | For whatever it's worth: I'm not skeptical of it at all.
                | I've done this in a terminal before without even looking
                | at the screen, so I know it can't have anything to do
                | with visual reaction.
                | 
                | Similar to the other reply, I also commonly do that when
                | typing, where I know I've fat-fingered a word,
                | exclusively from the feel of the keyboard.
                | 
                | But also, you're not just trying to beat the fork/exec.
                | You can also successfully beat any number of things: the
                | pre-commit hook, the DNS lookup, the TLS handshake.
                | Adding an additional 100ms of latency to that could
                | easily be the difference between preempting some action,
                | interrupting it, or noticing after it was completed.
        
               | shawabawa3 wrote:
               | I just tried it out.
               | 
                | I wrote this bash script:
                | 
                |     #!/usr/bin/env bash
                |     start_time=$(gdate +%s%3N)
                | 
                |     # Function to handle Ctrl+C (SIGINT)
                |     on_ctrl_c() {
                |         end_time=$(gdate +%s%3N)
                |         total_ms=$((end_time - start_time))
                | 
                |         # Integer seconds and remaining milliseconds
                |         seconds=$((total_ms / 1000))
                |         millis=$((total_ms % 1000))
                | 
                |         # Print the runtime in seconds.milliseconds
                |         echo "Script ran for ${seconds}.$(printf '%03d' ${millis}) seconds."
                |         exit 0
                |     }
                | 
                |     # Trap Ctrl+C (SIGINT) and call on_ctrl_c
                |     trap on_ctrl_c INT
                | 
                |     # Keep the script running indefinitely
                |     while true; do
                |         sleep 1
                |     done
                | 
                | And then I typed "bash sleep.sh git push origin
                | master<enter><ctrl+C>"
                | 
                | and got "Script ran for 0.064 seconds."
        
               | tokai wrote:
                | Typing is not a string of reactions to stimuli.
        
             | yreg wrote:
             | Let's say you are right. What would be a reason for
             | pressing ctrl+c instead of letting the command go through
             | in your example?
             | 
             | The delay is intended to let you abort execution of an
             | autocorrected command, but without reading the output you
             | have no idea how the typos were corrected.
        
             | dusted wrote:
              | Yes exactly! This is what I'm trying to argue as well: it
              | happens quite often for me that I submit a typo because
              | it's already "on its way out" when I catch it (before, or
              | at about the same time, it's finished and enter is
              | pressed), so the ctrl+c is already on its way :)
        
           | dusted wrote:
            | There are different ways to measure reaction time.
            | Circumstance is important.
            | 
            | Reaction to unreasonable, unexpected events will be very
            | slow, due to processing and trying to understand what is
            | happening and how to respond. For example: you are a racecar
            | driver, participating in a race, driving your car on a
            | racetrack in a peaceful country.
            | 
            | An armed attack: slow reaction time; identifying the
            | situation will take a long time, and selecting an
            | appropriate response will take longer.
            | 
            | A kid running into the road on the far side of the audience
            | stands: faster.
            | 
            | A kid running into the road near the audience: faster.
            | 
            | The car you're tailing braking with no turn to come: faster.
            | 
            | A crashed car behind a turn with a bad overview: faster.
            | 
            | The guy you're slipstreaming braking before a turn: even
            | faster.
            | 
            | For rhythm games, you anticipate and time the events, so you
            | could say these are no longer reactions but actions.
            | 
            | In the git context, where you typed something wrong, the
            | lines are blurred: you're processing while you're acting,
            | typing while you're evaluating what you're typing. The first
            | line of defence is feeling/sensing that you typed wrong,
            | either from the feedback that your fingers touched too many
            | keys, or from feeling that the rhythm of your typing was
            | wrong. At least for me, this happens way faster than my
            | visual input. I'm making errors as I type this, and they're
            | corrected faster than I can really read them; sometimes I
            | get it wrong and delete a word that was correct. But still,
            | watching people type, I see this all the time: they're not
            | watching and thinking about the letters exclusively; there's
            | something going on in their minds at the same time. 100 ms
            | is a rather wide window in this context.
            | 
            | Also, that said, we did a lot of experiments at work with a
            | reaction time tester; most people got less than 70 ms after
            | practice (an LED lights up at a random interval between 2
            | and 10 seconds)
        
             | tomatotomato37 wrote:
              | I also want to add that in the context of human sprinters
              | & F1 drivers, reaction time is measured via leg actuation,
              | which for a creature evolved to be an object-throwing
              | endurance hunter is going to have worse neural & muscular
              | latency than, say, your forearm. That is why using your
              | finger to trigger a response in a conventional computer
              | reaction-time tester can reach such high speeds: we're
              | essentially evolved for it.
        
         | dankwizard wrote:
         | Neo, get off HN and go destroy the agents!
        
         | frde_me wrote:
         | But the point here is not that you need to realize you typed
         | something wrong and then cancel (in that case just don't enable
         | the setting if you always want to abort). The point is that you
         | need to decide if the autocorrect suggestion was the right one.
         | Which you can't know until it tells you what it wants to
         | autocorrect to.
        
         | tokai wrote:
          | You are talking about an anticipatory response. Human
          | responses have been studied extensively, and it is broadly
          | accepted that ~100ms is the minimum for physiological
          | processing and motor response to a stimulus. If you feel you
          | go faster, you are anticipating your reaction.
        
       | Theodores wrote:
       | 0.1 seconds is a long time in drag racing where the timing tree
       | is very different to F1. With F1 there are the five red lights
       | that have to go out, and the time this takes is random.
       | 
       | With your git commands it is fairly predictable what happens
       | next, it is not as if the computer is randomly taunting you with
       | five lights.
       | 
       | I suggest a further patch where you can put git in either 'F1
       | mode', or, for our American cousins, 'Drag Strip mode'. This puts
        | it into a confirmation mode for everything, where the whole
       | timing sequence is shown in simplified ASCII art.
       | 
        | As a European, I would choose 'F1 mode' to have the five lights
       | come on in sequence, wait a random delay and then go out, for
       | 'git push' to happen.
       | 
        | I see no reason not to also have other settings, such as 'Ski
        | Sunday mode', where it does the 'beep beep beep BEEEP' of the
        | skiing competition. 'NASA mode' could be cool too.
       | 
       | Does anyone have any other timing sequences that they would like
       | to see in the next 'patch'?
        
       | cardamomo wrote:
        | Reading this post, the terms "software archeology" and
        | "programmer archeologist" come to mind. (Thank you, Vernor
        | Vinge, for the latter concept.)
        
         | schacon wrote:
         | I can't help but feel like you're calling me "old"...
        
           | cardamomo wrote:
           | Not my intention! Just an esteemed git archeologist
        
         | choult wrote:
         | I like to say that the danger of software archaeology is the
         | inevitable discovery of coprolites...
        
         | scubbo wrote:
         | Grrrr, this is such a bugbear for me. I was so excited to read
         | "A Fire Upon The Deep" because hackers talked up the concept of
         | "software archeology" that the book apparently introduced.
         | 
         | The concept is briefly alluded to in the prologue, and
         | then...nada, not relevant to the rest of the plot at all (the
         | _effects_ of the archeology are, but "software archeologists"
         | are not meaningful characters in the narrative). I felt bait-
         | and-switched.
        
       | tester756 wrote:
       | Yet another example where git shows its lack of user-friendly
       | design
        
         | hinkley wrote:
         | Well it is named after its author after all.
        
           | yreg wrote:
           | At first I thought this is unnecessary name-calling, but
           | apparently Linus has also made the same joke:
           | 
           | > "I'm an egotistical bastard, and I name all my projects
           | after myself. First Linux, now git."
        
       | Pxtl wrote:
       | Pet peeve: Timespan configs that don't include the unit in the
       | variable name.
       | 
       | I'm so sick of commands with --timeout params where I'm left
       | guessing if it's seconds or millis or what.
        
         | echoangle wrote:
         | Alternatively, you can also accept the value with a unit and
         | return an error when a plain number is entered (so --timeout 5s
         | or --timeout 5h is valid but --timeout 5 returns an error).
        
         | hinkley wrote:
          | Be it seconds or milliseconds, eventually your program evolves
          | to need tenths or less of that unit, and you can either
          | support decimal points, create a new field and deprecate the
          | old one, or make a breaking change that gives a migraine to
          | the poor SOB who has to validate the upgrade in production
          | before turning it on, especially if they have to toggle back
          | and forth more than a couple of times. Code isn't always
          | arranged so that a config change and a build/runtime change
          | can be tucked into a single commit that can be applied or
          | rolled back atomically.
         | 
         | All because someone thought surely nobody would ever want
         | something to happen on a quarter of a second delay/interval, or
         | a 250 microsecond one.
        
         | cratermoon wrote:
         | I'll bounce in with another iteration of my argument for
         | avoiding language primitive types and always using domain-
         | appropriate value types. A Duration is not a number type,
          | neither float nor integer. It may be _implemented_ using
         | whatever primitive the language provides, but for timeouts and
         | sleep, what is 1 Duration? The software always encodes some
          | definition of 1 unit in the time domain; make it clear to the
         | user or programmer.
        
         | skykooler wrote:
         | I spent a while debugging a library with a chunk_time_ms
         | parameter where it turned out "ms" stood for "microseconds".
        
           | grayhatter wrote:
           | I have a very hard time relating to everyone else complaining
           | about ~~lack of units~~ being unable to read/remember API
           | docs. But using `chunk_time_ms` where ms is MICROseconds?!
           | That's unforgivable, and I hope for all our sakes, you don't
           | have to use that lib anymore! :D
        
             | Pxtl wrote:
              | The sheer number of APIs in modern coding is exhausting. I
              | can't imagine either trying to keep all the stuff I'm
              | using in my head or having to go back to the docs every
              | time instead of being able to just read the code.
        
               | grayhatter wrote:
               | do you primarily write rust, or js?
        
       | userbinator wrote:
       | IMHO this is a great example of "creeping featurism". At best it
       | introduces unnecessary complexity, and at worst those reliant on
       | it will be encouraged to pay less attention to what they're
       | doing.
        
         | cedws wrote:
         | That's git in a nutshell. An elegant data structure masked by
         | many layers of unnecessary crap that has accumulated over the
         | years.
        
         | snowfarthing wrote:
         | What I don't get is why anyone would want to allow the
         | automation. Is it _really_ that difficult to use the up-arrow
         | key and correct the mistake? Doing something automatically when
          | it's sort-of correct is a recipe for doing things you didn't
         | intend to do.
        
           | dtgriscom wrote:
           | Double this. If I don't type the command that I want, I
           | _never_ want my computer guessing and acting on that guess.
            | Favors like that are why I hate Microsoft Word ("Surely you
           | didn't mean XXXX; I'll help you by changing it to YYYY. Oh,
           | you did it again, and in the same place? Well, I'll fix it
           | again for you. High five!")
        
           | userbinator wrote:
           | Things seem to be going in that direction with LLMs,
           | unfortunately.
        
       | pmontra wrote:
        | According to the Formula 1 website, drivers start on average
        | 0.2 seconds after the red lights go out:
        | https://www.formula1.com/en/latest/article/rapid-decisions-d...
        | 
        | Anyway, 0.1 seconds would be far too short even for them, who
        | have a job based on fast reaction times.
        
       | Reason077 wrote:
       | Deciseconds is such an oddball choice of units. Better to specify
        | the delay in either milliseconds or seconds - both are far
        | more commonly used in computing.
        
         | cobbal wrote:
         | It's a decent, if uncommon, unit for human reactions. The
         | difference between 0 and 1 seconds is a noticeably long time to
         | wait for something, but the difference between n and n+1
         | milliseconds is too fine to be useful.
        
           | jonas21 wrote:
           | Milliseconds are a commonly-used unit. It doesn't really
           | matter if 1 ms is too fine a granularity -- you'll just have
           | to write "autocorrect = 500" in your config file instead of
           | "autocorrect = 5", but who cares?
        
             | zxvkhkxvdvbdxz wrote:
             | Sure, yes. But for human consumption, decisecond is
             | something one can relate to.
             | 
             | I mean, you probably cannot sense the difference in
             | duration between 20 and 30 ms without special equipment.
             | 
             | But you can possibly sense the difference between 2 and 3
             | deciseconds (200 ms and 300 ms) after some practice.
             | 
              | I think the issue in this case was rather the retrofitting
              | of a boolean setting into a numerical one.
        
               | fragmede wrote:
                | The difference between 20 ms and 30 ms is the difference
                | between 50 fps and 33 fps, which is entirely noticeable
                | on a 1080p 60 Hz screen.
        
               | adzm wrote:
               | > But you can possibly sense the difference between 2 and
               | 3 deciseconds (200 ms and 300 ms) after some practice.
               | 
               | At 120bpm a sixteenth note is 125ms, the difference is
               | very obvious I would think
        
               | LocalH wrote:
               | And then you have the rhythm gamers who can adjust their
               | inputs by 5 or 10ms. Hell, I'm not even that good of a
                | player, but Fortnite Festival has a perfect indicator
                | whenever you're within 50ms of the target note timestamp
                | (and a debug display that shows you a running average
                | input offset), and I can easily adjust my play to be
                | slightly earlier or slightly later and watch my average
                | fall or climb.
               | 
               | Several top players have multiple "perfect full combos"
               | under their belt, where they hit _every_ note in the song
               | within 50ms of the target. I even have one myself on one
               | of the easier songs in the game.
        
             | bmicraft wrote:
              | If you're going to store that unit in one byte (possibly
              | even signed), suddenly deciseconds start making a lot of
              | sense.
        
               | lionkor wrote:
               | Why would you do that?
        
           | bobbylarrybobby wrote:
           | But the consumers of the API aren't humans, they're
           | programmers.
        
         | ralgozino wrote:
         | I got really confused for a moment, thinking that "deciseconds"
         | was some git-unit meaning "seconds needed to make a decision",
         | like in "decision-seconds" xD
         | 
            | Note: English is not my mother tongue, but I am from the
         | civilised part of the world that uses the metric system FWIW.
        
           | legacynl wrote:
            | I get where you're coming from; although deci is certainly
           | used, it's rare enough to not expect it, especially in the
           | context of git
        
           | ssernikk wrote:
           | I thought of the same thing!
        
       | theginger wrote:
        | Reaction times differ by type of stimulus: auditory is slightly
        | faster than visual, and tactile slightly faster than that, at
        | 90-180 ms. So if git gave you a slap instead of an error message
        | you might just about have time to react.
        
         | orangepanda wrote:
         | The slapping device would need to build inertia for you to feel
         | the slap. Is 10ms enough for that?
        
           | dullcrisp wrote:
           | I think if it's spring-loaded then definitely. (But it's
           | 100ms, not 10ms.)
        
             | orangepanda wrote:
             | Assuming the best case scenario of feeling the slap in
             | 90ms, it would leave 10ms to abort the command. Or did the
             | 90-180ms range refer to something else?
        
               | dullcrisp wrote:
               | Oh I see, you're right.
        
           | Aerroon wrote:
           | This is why any reasonable engineer would go with zaps
           | instead of slaps!
        
       | moogly wrote:
       | So Mercurial had something like this back in ancient times, but
       | git devs decided to make a worse implementation.
        
       | snet0 wrote:
       | This seems like really quite bad design.
       | 
       | EDIT: 1) is the result of my misreading of the article, the
       | "previous value" never existed in git.
       | 
        | 1) Pushing a change that silently breaks by reinterpreting a
        | previous configuration value (1 = true) as a different value
        | (1 = a 100 ms confirmation delay) should pretty much always be
        | avoided. Obviously you'd want to clear old values if they
        | existed (maybe this did happen? it's unclear to me), but you'd
        | also probably want to rename the configuration key.
       | 
       | 2) Having `help.autocorrect`'s configuration argument be a time,
       | measured in a non-standard (for most users) unit, is just plainly
       | bad. Give me a boolean to enable, and a decimal to control the
       | confirmation time.
        
         | iab wrote:
         | "Design" to me intimates an intentional broad-context plan.
         | This is no design, but an organic offshoot
        
           | snet0 wrote:
           | Someone thought of a feature (i.e. configurable autocorrect
           | confirmation delay) and decided the interface should be
           | identical to an existing feature (i.e. whether autocorrect is
           | enabled). In my thinking, that second part is "design" of the
           | interface.
        
             | iab wrote:
             | I think that is something that arose from happenstance, not
             | thoughtful intent - this is true because of how confusing
             | the end result is.
        
         | jsnell wrote:
         | For point 1, I think you're misunderstanding the timeline. That
         | change happened in 2008, during code review of the initial
         | patch to add that option as a boolean, and before it was ever
         | committed to the main git tree.
        
       | meitham wrote:
       | Really enjoyable read
        
       | catlifeonmars wrote:
        | I enabled autocorrect (set to 3 seconds) a year ago and have the
        | following observations about it:
       | 
       | 1. it does not distinguish between dangerous and safe actions
       | 
       | 2. it pollutes my shell history with mistyped commands
       | 
        | Reading this article gave me just enough of a nudge to finally
        | disable it after a year.
        
         | layer8 wrote:
         | If anything, it's better to set up aliases for frequent typos.
         | (Still "pollutes" the shell history of course.)
        
         | darkwater wrote:
          | About 2: well, _you_ are the actual polluter, even if you just
          | scroll back in history and use the same last wrong command
          | because it works anyway.
        
           | catlifeonmars wrote:
           | Well to put it into context, I use fish shell, which will
           | only save commands that have an exit code of 0. By using git
           | autocorrect, I have guaranteed that all git commands have an
           | exit code of 0 :)
        
             | fsckboy wrote:
             | wow, our brains work differently, how can you smile in that
             | circumstance? :)
             | 
             | It's a terrible idea of fish not to save errors in history
             | (even if the way bash does it is not optimal,
             | ignoring/obliterating the error return fact) because
             | running a command to look up the state of something can
             | easily return the state you are checking along with an
             | error code. "What was that awesome three letter TLD I
             | looked up yesterday that was available? damn, not a valid
             | domain is an error code" and just like that SEX.COM slips
             | through your grasp, and your only recourse would be to
             | hijack it.
             | 
             | but it's compoundedly worse to feel like the problem is
             | solved by autocorrect further polluting your history.
             | 
             | I would not want to be fixing things downstream of you,
             | where you would be perfectly happy downstream of me.
        
           | bobbylarrybobby wrote:
           | The issue is if you accept the wrong command instead of
           | retyping it correctly, you never get the correctly spelled
           | command into your history -- and even worse, you don't get it
           | to be more recent than the mistyped command.
        
       | mmcnl wrote:
       | The most baffling thing is that someone implemented deciseconds
       | as a unit of time. Truly bizarre.
        
       | WalterBright wrote:
       | > Originally, if you typed an unknown command, it would just say
       | "this is not a git command".
       | 
       | Back in the 70s, Hal Finney was writing a BASIC interpreter to
       | fit in 2K of ROM on the Mattel Intellivision system. This meant
        | every byte was precious. To report a syntax error, he shortened
        | the message for all errors to:
        | 
        |     EH?
       | 
       | I still laugh about that. He was quite proud of it.
        
         | WalterBright wrote:
         | I've been sorely tempted to do that with my compiler many
         | times.
        
         | nikau wrote:
         | How wasteful, ed uses just ? for all errors, a 3x saving
        
           | ekidd wrote:
            | Ed _also_ uses "?" for "Are you sure?" If you're sure, you
           | can type the last command a second time to confirm.
           | 
           | The story goes that ed was designed for running over a slow
           | remote connection where output was printed on paper, and the
           | keyboard required very firm presses to generate a signal.
           | Whether this is true or folklore, it would explain a lot.
           | 
           | GNU Ed actually has optional error messages for humans,
           | because why not.
        
             | p_l wrote:
             | /bin/ed did in fact evolve on very slow teletypes that used
             | roll paper.
             | 
              | That made the option to print file content with line
              | numbers very useful. (I personally only used very dumb
              | terminals rather than a physical teletype, but the
              | experience is a bit similar, just with shorter
              | scrollback :D)
        
               | euroderf wrote:
               | Can confirm. Using ed on a Texas Instruments dial-up
               | terminal (modem for phone handset) with a thermal
               | printer.
               | 
               | And taking a printed listing before heading home with the
               | terminal.
        
             | teraflop wrote:
             | https://www.gnu.org/fun/jokes/ed-msg.en.html
             | 
             | "Note the consistent user interface and error reportage. Ed
             | is generous enough to flag errors, yet prudent enough not
             | to overwhelm the novice with verbosity."
        
               | fsckboy wrote:
               | > _not to overwhelm the novice with verbosity_
               | 
                | that doesn't make complete sense; in unixland it's old-
               | timers who understand the beauty of silence and brevity,
               | while novices scan the screen/page around the new prompt
               | for evidence that something happened
        
               | Vinnl wrote:
               | If I didn't know any better, I'd have thought they
               | weren't entirely serious.
        
               | kstrauser wrote:
                | Ed helps induct novices into the ways of the old-timers
                | because it loves them and wants them to be happy.
        
             | llm_trw wrote:
              | So many computer conventions evolved for very good reasons
              | because of physical limitations.
             | 
              | When each line of code was its own punch card, having a {
             | stand alone on a line was somewhere between stupid and
              | pointless. That also explains why lisps were so hated
             | for so long.
             | 
             | By the same token today you can tell which projects use an
             | IDE as the only way to code them because of the terrible
             | documentation. It is after all not the end of the world to
             | have to read a small function when you can just tab to see
             | it. Which is true enough until you end up having those
             | small functions calling other small functions and you're in
             | a stack 30 deep trying to figure out where the option you
             | passed at the top went.
        
             | miohtama wrote:
             | Here is a good YT channel on such computers and terminals.
             | 
              | Not only the story; some of these are running today,
              | resurrected.
             | 
             | https://youtu.be/zeL3mbq1mEg?si=gUUO_nEsMtcZ5Z_A
        
           | nine_k wrote:
            | There are really few systems where you can save a _part_ of
            | a byte! And if you need to output a byte anyway, it doesn't
            | matter which byte it is. So you can indulge and use "?", "!",
           | "*", or even "&" to signify various types of error
           | conditions.
           | 
           | (On certain architectures, you could use 1-byte soft-
           | interrupt opcodes to call the most used subroutine, but 8080
           | lacked it IIRC; on 6502 you could theoretically use BRK for
           | that. But likely you had other uses for it than printing
           | error diagnostics.)
        
         | nl wrote:
         | It'd be interesting and amusing if he'd made the private key to
         | his part of Bitcoin a variation on that.
         | 
         | RIP.
        
         | vunderba wrote:
         | > EH?
         | 
         | I feel like that would also make a good response from the text
         | parser in an old-school interactive fiction game.
         | 
         | Slightly related, but I remember some older variants of BASIC
         | using "?" to represent the PRINT statement - though I think it
         | was less about memory and more just to save time for the
         | programmer typing in the REPL.
        
           | chuckadams wrote:
           | It was about saving memory by tokenizing keywords: '?' is how
           | PRINT actually was stored in program memory, it just rendered
           | as 'PRINT'. Most other tokens were typically the first two
           | characters, the first lowercase, the second uppercase: I
           | remember LOAD was 'lO' and DATA was 'dA', though on the C64's
           | default character glyphs they usually looked like L<box char
           | HN won't render> and D<spade suit char>.
           | 
           | All this being on a C64 of course, but I suspect most
           | versions of Bill Gates's BASIC did something similar.
        
             | egypturnash wrote:
              | C64 BASIC was tokenized into one byte, with the most
              | significant bit set:
              | https://www.c64-wiki.com/wiki/BASIC_token
              | 
              | Each command could be typed in two ways: the full name, or
              | the first two letters, with the second capitalized. Plus a
              | few exceptions like "?" turning into the PRINT token ($99,
              | nowhere near the PETSCII value for ?) and π becoming $FF.
              | 
              | The tokens were expanded into full text strings when you
              | would LIST the program. Which was always amusing if you had
              | a _very_ dense multi-statement line that expanded to longer
              | than the 80 characters the C64's tokenizer routine could
              | handle; you'd have to go back and replace some or all
              | commands with the short form before you could edit it.
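              | 
              | A toy sketch of the scheme described ($99 for PRINT is
              | from the linked table; treat the mapping as illustrative,
              | not exhaustive):
              | 
              |     #include <cstdint>
              |     #include <map>
              |     #include <string>
              |     
              |     // One-byte tokens with the most significant bit set.
              |     const std::map<uint8_t, std::string> kTokens = {
              |         {0x99, "PRINT"}, {0x8F, "REM"}, {0x89, "GOTO"},
              |     };
              |     
              |     // Expand a tokenized line to text, as LIST does.
              |     std::string Detokenize(const std::string& line) {
              |         std::string out;
              |         for (unsigned char c : line) {
              |             auto it = kTokens.find(c);
              |             if ((c & 0x80) && it != kTokens.end())
              |                 out += it->second;
              |             else
              |                 out += static_cast<char>(c);
              |         }
              |         return out;
              |     }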
        
               | mkesper wrote:
                | As far as I remember, you couldn't even run these
                | programs anymore after listing them.
        
               | LocalH wrote:
               | You could run them just fine as long as you didn't try to
               | edit the listed lines if they were longer than two screen
               | lines. The same is true for a C128 in C128 mode, except
               | the limit is extended to 160 characters (four 40-column
               | lines).
        
               | vrighter wrote:
                | The ZX Spectrum did this too, except you could only type
                | the "short forms" (which were always rendered in full).
                | It had keywords on its keys, i.e. to type PRINT, you had
                | to press the "print" key.
        
             | fragmede wrote:
             | D
        
         | furyofantares wrote:
         | I run a wordle spinoff, xordle, which involves two wordle
         | puzzles on one board. This means you can guess a word and get
         | all 5 letters green, but it isn't either of the target words.
         | When you do this it just says "Huh?" on the right. People love
         | that bit.
        
           | speerer wrote:
           | Can confirm. I loved that bit.
        
           | dotancohen wrote:
           | > People love that bit.
           | 
           | Add another seven Easter eggs, and people could love that
           | byte.
        
         | zubairq wrote:
          | Pretty cool... I had no idea Hal was such a hacker on
          | personal computers in those days. Makes me think of Bitcoin
          | whenever I hear Hal mentioned.
        
           | WalterBright wrote:
           | He wasn't hacking. Hal worked for Aph, and Aph contracted
           | with Mattel to deliver console game cartridges.
           | 
           | There was once a contest between Caltech and MIT. Each was to
           | write a program to play Gomoku, and they'd play against each
           | other. Hal wrote a Gomoku-playing program in a weekend, and
           | it trashed MIT's program.
           | 
           | It was never dull with Hal around.
        
         | euroderf wrote:
         | Canadians everywhere.
        
         | cma wrote:
         | Earliest I've seen with 'Eh?' as an interpreter response is
         | RAND's JOSS:
         | 
         | https://en.wikipedia.org/wiki/JOSS#/media/File:JOSS_Session....
         | 
         | https://en.wikipedia.org/wiki/JOSS
         | 
          | It had about 5KB of memory but, compared to the Intellivision,
          | the machine weighed about 5,000 lbs.
        
         | dredmorbius wrote:
         | ed (the standard editor) optimises that by a further 66.7%.
         | 
         | <https://www.gnu.org/fun/jokes/ed-msg.html>
        
       | zX41ZdbW wrote:
       | > Which was what the setting value was changed to in the patch
       | that was eventually accepted. This means that setting
       | help.autocorrect to 1 logically means "wait 100ms (1 decisecond)
       | before continuing".
       | 
       | The mistake was here. Instead of retargeting the existing setting
       | for a different meaning, they should have added a new setting.
       | help.autocorrect - enable or disable
       | help.autocorrect.milliseconds - how long to wait
       | 
        | There are similar mistakes in other systems. E.g., MySQL has
        | innodb_flush_log_at_trx_commit,
        | 
        | which can be 0 (disabled) or 1 (enabled), and then 2 was added
        | as something special.
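        | 
        | For illustration, the split might look like this in .gitconfig
        | (hypothetical option names from the proposal above, not
        | settings Git actually supports):
        | 
        |     [help]
        |         autocorrect = true   ; on or off
        |     [help "autocorrect"]
        |         milliseconds = 1500  ; how long to wait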
        
         | stouset wrote:
         | The "real" issue is an untyped configuration language which
         | tries to guess at what you actually meant by 1. They're
         | _tripling_ down on this by making 1 a Boolean true but other
         | integers be deciseconds. This is the same questionable logic
         | behind YAML's infamous "no" == false.
        
           | Dylan16807 wrote:
           | I'd say the new addition is more of a special case of
           | rounding than it is messing up types.
        
             | stouset wrote:
             | 1 was also accepted as a Boolean true in this context, and
             | it still is in other contexts.
        
               | Dylan16807 wrote:
               | > 1 was also accepted as a Boolean true in this context,
               | and it still is in other contexts.
               | 
               | Is "was" before the change described at the end of the
               | article, or after it?
               | 
               | Before the change, any positive number implied that the
               | feature is on, because that's the only thing that makes
               | sense.
               | 
               | After the change, you _could_ say that 1 stops being
                | treated as a number, but it's simpler to say it's still
               | being treated as a number and is getting rounded down.
               | The interpretation of various types is still messy, but
               | it didn't get _more_ messy.
        
               | stouset wrote:
               | In an earlier iteration the configuration value was
               | Boolean true/false. A 1 was interpreted as true. They
               | changed it to an integral value. This is the entire setup
               | for the problem in the article.
               | 
               | Elsewhere, 1 is still allowed as a true equivalent.
        
               | Dylan16807 wrote:
               | But then they made it _not_ be a boolean when they added
               | the delay. They went the opposite direction and it caused
               | problems. How is this a situation of  "tripling down"? It
               | seems to me like they couldn't make up their mind.
        
               | stouset wrote:
                | The only reason they even need this further hack is
                | that people can reasonably assume 1 is a bool.
                | 
                | Now, because of this confusion, they're special-casing 1
                | to actually mean 0. But other integers are still
                | themselves. They've also added logic so that the
                | _strings_ "yes", "no", "true", "off" are interpreted as
                | booleans too.
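                | 
                | A minimal sketch of the behaviour being described
                | (assumed semantics, not Git's actual code):
                | 
                |     #include <string>
                |     
                |     // -1: disabled; 0: run immediately;
                |     // n > 0: wait n deciseconds.
                |     int ParseAutocorrect(const std::string& v) {
                |         if (v == "no" || v == "false" || v == "off")
                |             return -1;
                |         if (v == "yes" || v == "true")
                |             return 0;
                |         int n = std::stoi(v);  // throws on non-numeric
                |         return n == 1 ? 0 : n; // 1 is the special case
                |     }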
        
           | JBiserkov wrote:
           | NO is the country code for Norway.
        
             | xnorswap wrote:
             | "The Norway Problem":
             | https://hitchdev.com/strictyaml/why/implicit-typing-
             | removed/
        
         | smaudet wrote:
          | Not sure where the best place to mention this would be, but
          | 0.1 deciseconds is not unreasonable either... yes, the
          | fastest recorded _random_ reaction time is maybe 1.5 ds
          | (coincidentally, about the average gamer reaction time), but
          | non-random reaction times can be much faster (e.g. on a
          | beat).
          | 
          | So if you wanted to go that fast, you could; the invocation
          | should have relatively stable timing (on the order of some
          | milliseconds).
        
       | IshKebab wrote:
       | > Junio came back to request that instead of special casing the
       | "1" string, we should properly interpret any boolean string value
       | (so "yes", "no", "true", "off", etc)
       | 
       | The fact that this guy has been the Git maintainer for so long
       | and designs settings like this explains a lot!
        
       | moffkalast wrote:
       | > As some of you may have guessed, it's based on a fairly simple,
       | modified Levenshtein distance algorithm
       | 
       | One day it'll dump the recent bash and git history into an LLM
       | that will say something along the lines of "alright dumbass
       | here's what you actually need to run"
        
       | snvzz wrote:
       | At 60fps that's 6 frames, which is plenty.
       | 
       | That aside, I feel the reason is to advertise the feature so that
       | the user gets a chance to set the timer up to his preference or
       | disable autocorrect entirely.
        
         | ninjamuffin99 wrote:
          | 6 frames is not enough to realize you made a typo / read
          | whatever git is outputting telling you that you made a typo,
          | and then respond to that input correctly.
          | 
          | In video games it may seem like a lot of time for a reaction,
          | but a lot of that "reaction time" is based on the prior
          | context of the game, visuals, muscle memory, and whatnot. If
          | you're playing Street Fighter and trying to parry an attack
          | that has a 6-frame startup, you're already anticipating an
          | attack to "react" to before it even starts. When typing git
          | commands, you will never be on that kind of alert,
          | anticipating your own typos.
        
           | snvzz wrote:
           | >6 frames is not enough
           | 
           | git good.
           | 
            | (the parent post was a setup for this)
        
       | physicles wrote:
       | The root cause here is poorly named settings.
       | 
       | If the original setting had been named something bool-y like
       | `help.autocorrect_enabled`, then the request to accept an int
       | (deciseconds) would've made no sense. Another setting
       | `help.autocorrect_accept_after_dsec` would've been required. And
        | `dsec` is so oddball that anyone who used it would've had to
        | look it up.
       | 
       | I insist on this all the time in code reviews. Variables must
       | have units in their names if there's any ambiguity. For example,
       | `int timeout` becomes `int timeout_msec`.
       | 
       | This is 100x more important when naming settings, because they're
       | part of your public interface and you can't ever change them.
        
         | yencabulator wrote:
         | I do that, but I can't help thinking that it smells like
         | Hungarian notation.
         | 
          | The best alternative I've found is to accept units in the
          | values, like "5 seconds" or "5s". Then a bare "1" is an
          | invalid value.
        
           | physicles wrote:
           | That's not automatically bad. There are two kinds of
           | Hungarian notation: systems Hungarian, which duplicates
           | information that the type system should be tracking; and apps
           | Hungarian, which encodes information you'd express in types
           | if your language's type system were expressive enough. [1]
           | goes into the difference.
           | 
           | [1] https://www.joelonsoftware.com/2005/05/11/making-wrong-
           | code-...
        
             | yencabulator wrote:
              | And this is exactly the kind of thing the language should
              | have a type for: Duration.
        
               | crazygringo wrote:
               | Not really.
               | 
               | I don't want to have a type for an integer in seconds, a
               | type for an integer in minutes, a type for an integer in
               | days, and so forth.
               | 
               | Just like I don't want to have a type for a float that
               | means width, and another type for a float that means
               | height.
               | 
                | Putting the _unit_ (as opposed to the _data type_) in the
                | variable name is helpful, and is not the same as types.
               | 
               | For really complicated stuff like dates, sure make a type
               | or a class. But for basic dimensional values, that's
               | going way overboard.
        
               | yencabulator wrote:
               | > I don't want to have a type for an integer in seconds,
               | a type for an integer in minutes, a type for an integer
               | in days, and so forth.
               | 
               | This is not how a typical Duration type works.
               | 
               | https://pkg.go.dev/time#Duration
               | 
               | https://doc.rust-
               | lang.org/nightly/core/time/struct.Duration....
               | 
               | https://docs.rs/jiff/latest/jiff/struct.SignedDuration.ht
               | ml
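                | 
                | A minimal C++ sketch of the same idea: one Duration
                | type, any unit on the way in, and lossy conversions
                | rejected at compile time.
                | 
                |     #include <chrono>
                |     using namespace std::chrono_literals;
                |     
                |     std::chrono::milliseconds a = 250ms;
                |     std::chrono::milliseconds b = 2s;   // exact, implicit
                |     // std::chrono::seconds c = 250ms;  // does not
                |     // compile: would silently lose precision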
        
               | crazygringo wrote:
                | I'm just saying, this form of "Hungarian" variable
                | naming (always including the unit) is useful.
               | 
               | Not everything should be a type.
               | 
               | If all you're doing is calculating the difference between
               | two calls to time(), it can be much more straightforward
               | to call something "elapsed_s" or "elapsed_ms" instead of
               | going to all the trouble of a Duration type.
        
         | TeMPOraL wrote:
         | > _I insist on this all the time in code reviews. Variables
          | must have units in their names if there's any ambiguity. For
         | example, `int timeout` becomes `int timeout_msec`._
         | 
         | Same here. I'm still torn when this gets pushed into the type
         | system, but my general rule of thumb in C++ context is:
         | void FooBar(std::chrono::milliseconds timeout);
         | 
         | is OK, because that's a function signature and you'll see the
         | type when you're looking at it, but with variables, `timeout`
         | is not OK, as 99% of the time you'll see it used like:
         | auto timeout = gl_timeout; // or GetTimeoutFromSomewhere().
         | FooBar(timeout);
         | 
          | Common use of `auto` in C++ makes it a PITA to trace down the
          | exact type when it matters.
         | 
         | (Yes, I use IDE or a language-server-enabled editor when
         | working with C++, and no, I don't have time to stop every 5
         | seconds to hover my mouse over random symbols to reveal their
         | types.)
        
           | physicles wrote:
           | Right, your type system can quickly become unwieldy if you
           | try to create a new type for every slight semantic
           | difference.
           | 
           | I feel like Go strikes a good balance here with the
           | time.Duration type, which I use wherever I can (my _msec
           | example came from C). Go doesn't allow implicit conversion
           | between types defined with a typedef, so your code ends up
           | being very explicit about what's going on.
        
           | theamk wrote:
            | It should not matter, though, because std::chrono types are
            | not int-convertible, so whether it is "milliseconds" or
            | "microseconds" or whatever is a minor implementation detail.
            | 
            | You cannot compile FooBar(5000), so there is never the kind
            | of confusion C has. You have to be explicit:
            | "FooBar(std::chrono::milliseconds(500))", or "FooBar(500ms)"
            | if you have literals enabled. And this will handle
            | conversion if needed: you can always write FooBar(500ms) and
            | it will work even if the actual parameter type is
            | microseconds.
            | 
            | Similarly, your "auto" example will only compile if
            | gl_timeout is a compatible type, so you don't have to worry
            | about units at all when all your intervals use std::chrono.
        
           | OskarS wrote:
           | One of my favorite features of std::chrono (which can be a
           | pain to use, but this part is pretty sweet) is that you don't
           | have to specify the exact time unit, just a generic duration.
           | So, combined with chrono literals, both of these work just
           | like expected:
            |     std::this_thread::sleep_for(10ms); // sleep for 10 milliseconds
            |     std::this_thread::sleep_for(1s);   // sleep for one second
            |     std::this_thread::sleep_for(50);   // does not work, unit is
            |                                        // required by type system
           | 
           | That's such a cool way to do it: instead of forcing you to
           | specify the exact unit in the signature (milliseconds or
           | seconds), you just say that it's a time duration of some
           | kind, and let the user of the API pick the unit. Very neat!
        
             | twic wrote:
             | I do something similar in Java by taking a
             | java.time.Duration in any method dealing with time. We
             | don't have the snazzy literal syntax, but that means users
             | have to write:
              |     someMethodDealingWithTime(Duration.ofMillis(10));
              |     someMethodDealingWithTime(Duration.ofSeconds(1));
              |     someMethodDealingWithTime(50); // does not compile
              | 
              | Since these often come from config, I also have a method
              | parseDuration which accepts a variety of simple but
              | unambiguous string formats for these, like "10ms", "1s",
              | "2h30m", "1m100us", "0", "inf", etc. So in config we can
              | write:
              | 
              |     galactus.requestTimeout=30s
             | 
             | No need to bake the unit into the name, but also less
             | possibility of error.
        
               | TeMPOraL wrote:
                | > _I also have a method parseDuration which accepts a
                | variety of simple but unambiguous string formats for
                | these, like "10ms", "1s", "2h30m", "1m100us", "0", "inf",
                | etc._
               | 
                | I did that too with parsers for configuration files; my
                | rule of thumb is that the unit always has to be visible
                | anywhere a numeric parameter occurs: in the type, in the
                | name, or in the value. E.g.:
                | 
                |     // in config file:
                |     { ..., "timeout": "10 seconds", ... }
                |     
                |     // in parsing code:
                |     auto ParseTimeout(const std::string&)
                |         -> Expected<std::chrono::milliseconds>;
                |     
                |     // in a hypothetical intermediary if, for some
                |     // reason, we need to use a standard numeric type:
                |     int timeoutMsec = ....;
               | 
               | Wrt. string formats, I usually allowed multiple variants
               | for a given time unit, so e.g. all these were valid and
               | equivalent values: "2h", "2 hour", "2 hours". I'm still
               | not convinced it was the best idea, but the Ops team
               | appreciated it.
               | 
                | (I didn't allow mixing time units like the "2h30m" in
                | your example, so as to keep parsing a single "read a
                | double, read the rest as a string key into a lookup
                | table" pass, but I'll think about allowing it the next
                | time I'm in such a situation. Are there any well-known
                | pros/cons to this?)
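                | 
                | A sketch of what allowing mixed units might look like,
                | assuming integral counts and a fixed suffix set (the
                | name ParseMixedDuration is made up for illustration):
                | 
                |     #include <cctype>
                |     #include <chrono>
                |     #include <optional>
                |     #include <string>
                |     #include <string_view>
                |     
                |     std::optional<std::chrono::milliseconds>
                |     ParseMixedDuration(std::string_view s) {
                |         using namespace std::chrono;
                |         milliseconds total{0};
                |         size_t i = 0;
                |         while (i < s.size()) {
                |             // Read the numeric count.
                |             size_t start = i;
                |             while (i < s.size() &&
                |                    std::isdigit((unsigned char)s[i]))
                |                 ++i;
                |             if (i == start) return std::nullopt;
                |             long long n = std::stoll(
                |                 std::string(s.substr(start, i - start)));
                |             // Read the unit suffix.
                |             size_t u = i;
                |             while (i < s.size() &&
                |                    std::isalpha((unsigned char)s[i]))
                |                 ++i;
                |             std::string_view unit = s.substr(u, i - u);
                |             if (unit == "h")       total += hours(n);
                |             else if (unit == "m")  total += minutes(n);
                |             else if (unit == "s")  total += seconds(n);
                |             else if (unit == "ms") total += milliseconds(n);
                |             else return std::nullopt;  // unknown unit
                |         }
                |         return total;
                |     }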
        
           | codetrotter wrote:
           | > Yes, I use IDE or a language-server-enabled editor when
           | working with C++, and no, I don't have time to stop every 5
           | seconds to hover my mouse over random symbols to reveal their
           | types.
           | 
            | JetBrains does a great thing where it shows types for a lot
            | of things as inline labels all the time, instead of making
            | you hover over everything.
        
             | TeMPOraL wrote:
             | Right; the so-called "inlay hints" are also provided by
             | clangd over LSP, so I have them in my Emacs too. Super
             | helpful, but not always there when I need them.
        
         | bmicraft wrote:
         | > Variables must have units in their names if there's any
         | ambiguity
         | 
          | Then you end up with something where you can write
          | "TimeoutSec=60" as well as "TimeoutSec=1min", as in the case
          | of systemd :)
          | 
          | I'd argue they'd have been better off not putting the unit
          | there. But yes, aside from that particular weirdness I fully
          | agree.
        
           | physicles wrote:
            | > Then you end up with something where you can write
            | "TimeoutSec=60" as well as "TimeoutSec=1min" in the case of
            | systemd :)
           | 
           | But that's wrong too! If TimeoutSec is an integer, then don't
           | accept "1min". If it's some sort of duration type, then don't
           | call it TimeoutSec -- call it Timeout, and don't accept the
           | value "60".
        
             | whycome wrote:
              | Can we call this the microwave paradox?
        
         | bambax wrote:
         | Yes! As it is, '1' is ambiguous, as it can mean "True" or '1
         | decisecond', and deciseconds are not a common time division.
         | The units commonly used are either seconds or milliseconds.
         | Using uncommon units should have a very strong justification.
        
         | MrDresden wrote:
         | > _I insist on this all the time in code reviews. Variables
          | must have units in their names if there's any ambiguity. For
         | example, `int timeout` becomes `int timeout_msec`._
         | 
         | Personally I flag any such use of int in code reviews, and
         | instead recommend using value classes to properly convey the
         | unit (think Second(2) or Millisecond(2000)).
         | 
            | This of course depends on the language, its capabilities,
            | and its norms.
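          | 
          | A minimal sketch of the value-class idea (names here are just
          | illustrative):
          | 
          |     struct Milliseconds {
          |         long long value;
          |         explicit Milliseconds(long long v) : value(v) {}
          |     };
          |     
          |     void SetTimeout(Milliseconds t);  // unit lives in the type
          |     
          |     // SetTimeout(Milliseconds(2000)) compiles;
          |     // SetTimeout(2000) does not.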
        
           | kqr wrote:
            | I agree. Annotating type information in the variable name
            | is a missed opportunity to actually use the type system for
            | this.
            | 
            | I suppose this is the "actual" problem with the git setting,
            | insofar as there is an "actual" problem: the variable
            | started out as a boolean, but then quietly turned into a
            | timespan type, without triggering warnings on user configs
            | that got reinterpreted as a result.
        
         | scott_w wrote:
         | Yes and it's made worse by using "deciseconds," a unit of time
         | I've used literally 0 times in my entire life. If you see a
         | message saying "I'll execute in 1ms," you'd look straight to
         | your settings!
        
         | miohtama wrote:
         | It's almost like Git is a version control system built by
         | developers who only knew Perl and C.
        
         | deltaburnt wrote:
         | Though, ironically, msec is still ambiguous because that could
         | be milli or micro. It's often milli so I wouldn't fault it, but
         | we use micros just enough at my workplace where the distinction
         | matters. I would usually do timeout_micros or timeout_millis.
        
           | thousand_nights wrote:
           | can also do usec for micro
        
           | hnuser123456 wrote:
           | Shouldn't that be named "usec"? But then again, I can
           | absolutely see someone typing msec to represent microseconds.
        
           | seszett wrote:
           | We use "ms" because it's the standard SI symbol. Microseconds
           | would be "us" to avoid the u.
           | 
           | In fact, our French keyboards do have a "u" key (as far as I
           | remember, it was done so as to be able to easily write all SI
           | prefixes) but using non-ASCII symbols is always a bit risky.
        
           | 3eb7988a1663 wrote:
            | ms for microseconds would be a paddlin'. The micro prefix is
            | µ, but a "u" is sufficient for ease of typing in an ASCII
            | alphabet.
        
         | jayd16 wrote:
         | What would you call the current setting that takes both string
         | enums and deciseconds?
        
       | baggy_trough wrote:
       | Whenever you provide a time configuration option, field, or
       | parameter, always encode the units into the name.
        
       | thedufer wrote:
       | > Now, why Junio thought deciseconds was a reasonable unit of
       | time measurement for this is never discussed, so I don't really
       | know why that is.
       | 
       | xmobar uses deciseconds in a similar, albeit more problematic
       | place - to declare how often to refresh each section. Using
       | deciseconds is fantastic if your goal is for example configs to
       | have numbers small enough that they clearly can't be
       | milliseconds, resulting in people making the reasonable
       | assumption that it must thus be seconds, and running their
       | commands 10 times as often as they intended to. I've seen a
       | number of accidental load spikes originating from this issue.
        
       | NoPicklez wrote:
        | Cool, but I don't know why it needs to be justified that it's
        | too fast even for an F1 driver. Why can't we just say it's too
        | fast without all the fluff about being a race car driver? The
        | guy isn't even an F1 driver, but a Le Mans driver.
        
         | benatkin wrote:
         | The author is someone who went to conferences that DHH also
         | attended, so for some of the audience it's a funny anecdote.
        
         | blitzar wrote:
          | My deodorant is good enough for an F1 driver; why wouldn't my
          | git client adhere to the same standards?
        
       | inoffensivename wrote:
       | Maybe a not-so-hot take on this... The only option this
       | configuration parameter should take is "never", which should also
       | be the default. Any other value should be interpreted as "never".
        
       | kqr wrote:
       | This timeout makes me think about the type of scenario where I
       | know I have mistyped the command, e.g. because I accidentally hit
       | return prematurely, or hit return when I was trying to backspace
       | away a typo. In those situations I reflexively follow return with
       | an immediate ctrl-C, and might be able to get in before the 100
       | ms timeout. So it's not entirely useless!
        
       | newman314 wrote:
       | For reference, Valtteri Bottas supposedly recorded a 40ms!!!
       | reaction time at the 2019 Japanese Grand Prix.
       | 
       | https://www.formula1.com/en/video/valtteri-bottas-flying-fin...
        
         | voidUpdate wrote:
         | Is there a random time between the red lights and the green
         | lights, or is it always the same? Because that feels more like
         | learning the timings than reacting to something
        
           | jsnell wrote:
           | Yes, the timing is random.
        
           | eknkc wrote:
            | There are no green lights; when the reds go out, the race
            | starts. There is a random delay between all the reds
            | lighting up and their going off.
        
         | amai wrote:
         | Most probably that was a false start:
         | 
         | "World Athletics rules that if an athlete moves within 100
         | milliseconds (0.1 seconds) of the pistol being fired to start
         | the race, then that constitutes a false start."
         | 
         | https://www.nytimes.com/athletic/5678148/2024/08/03/olympics...
        
           | arp242 wrote:
           | That value has also been criticised as too high.
        
             | legacynl wrote:
             | What is the argument for it being too high?
             | 
              | The argument for it being what it is is that our auditory
              | processing (when using a starter pistol) or visual
              | processing (looking at start lights) takes time, as does
              | transferring that signal to the relevant muscles. 100
              | milliseconds is actually a pretty good average.
        
               | arp242 wrote:
               | Basically, some people can consistently respond faster.
               | The 100ms figure just isn't accurate.
               | 
               | I don't have extensive resources/references at hand, but
               | I've read about this a few times over the years.
        
               | legacynl wrote:
               | > I don't have references ... but I've read about this a
               | few times over the years.
               | 
                | Yeah, well, I did a psych BSc and I'm telling you that
                | it's impossible.
                | 
                | It's certainly possible for people to do and notice
                | things way faster than that, like a musician noticing a
                | drummer being a few ms off the beat, or speedrunners
                | hitting frame-perfect inputs, but in those cases
                | expectation and internal timekeeping are doing most of
                | the heavy lifting.
        
               | Aerroon wrote:
                | It's rhythm vs. reaction time. We can keep a rhythm at
                | much smaller time intervals than we can react at.
        
         | dotancohen wrote:
          | I once had a .517 reaction time in a drag race. You know how
          | I did that? By jumping the start and being just late enough
          | not to foul. It was completely unrepeatable.
          | 
          | I'm willing to bet Bottas "fouled" that one too, just late
          | enough.
        
       | jakubmazanec wrote:
       | > introduced a small patch
       | 
       | > introduced a patch
       | 
       | > the Git maintainer, suggested
       | 
       | > relatively simple and largely backwards compatible fix
       | 
       | > version two of my patch is currently in flight to additionally
       | 
        | And this is how interfaces become unusable: through a thousand
        | small "patches" created without any planning or oversight.
        
         | olddustytrail wrote:
         | Ah, if only the Git project had someone of your talents in
         | charge (rather than the current band of wastrel miscreants).
         | 
         | Then it might enjoy some modicum of success, instead of
         | languishing in its well-deserved obscurity!
        
           | jakubmazanec wrote:
            | Git has a notoriously bad CLI (as other commenters here
            | have noted). Your snarky comment provides no value to this
            | discussion.
        
             | olddustytrail wrote:
             | On the contrary, it offers a little levity and humour, and
             | possibly even the chance for some self-reflection as you
             | consider why you thought it was appropriate to insult the
             | folk who manage Git. I'm sure you can manage at least one
             | of those?
        
               | jakubmazanec wrote:
                | Your comment isn't funny, just snarky. I suggest you
                | reread the HN guidelines and do some reflection
                | yourself.
                | 
                | Also, if you see it as an insult, that's your mistake.
                | It is just a simple empirical observation. I'm not
                | saying it's an original thought - feel free to Google
                | more about this topic.
                | 
                | I won't waste any more time, since you obviously aren't
                | interested in discussion.
        
               | Ylpertnodi wrote:
               | >I won't waste any more time since you obviously aren't
               | interested in discussion.
               | 
               | Pot. Kettle. Black.
        
       | ocean_moist wrote:
       | Fun fact: Professional gamers (esport players) have reaction
       | times around 150ms to 170ms. 100ms is more or less impossible.
        
       | bobobob420 wrote:
       | Git autocorrect sounds like a very bad idea.
        
       | rossant wrote:
        | First time I've heard of deciseconds. What a strange decision.
        
       | outside1234 wrote:
       | Regardless of the delay time, this just seems like an incredibly
       | bad idea all around for something as important as source control.
        
       | bun_terminator wrote:
       | clickbait, don't hide the truth in a pseudo-riddle
        
       | krab wrote:
        | What if this was an intentional, if overly clever, way to avoid
        | one special case?
        | 
        | I mean, for all practical purposes, a value of 1 amounts to
        | unconditional execution.
        
       ___________________________________________________________________
       (page generated 2025-01-20 23:02 UTC)