[HN Gopher] Why tail-recursive functions are loops
       ___________________________________________________________________
        
       Why tail-recursive functions are loops
        
       Author : speckx
       Score  : 54 points
       Date   : 2025-08-08 15:10 UTC (3 days ago)
        
 (HTM) web link (kmicinski.com)
 (TXT) w3m dump (kmicinski.com)
        
       | hinkley wrote:
       | Practically the day after I learned about tail recursion in CS
        | class, I learned that almost all recursive calls can be
        | translated to iteration, that in many cases the iterative
        | version is easier to scan and as fast if not faster, and that
        | it can usually handle much, much larger inputs than recursion
        | by avoiding stack overflow.
       | 
       | Tail recursion is meant to fix the latter. But what we mean to
       | happen and what actually happens ain't ever exactly similar.
       | 
       | Tail recursion IME is a bigger foot gun than relying on someone
       | to add a new conditional branch at the end of a block in an
       | iterative algorithm without fucking it up in the process. And
       | iteration responds generally better to Extract Function. And
       | while I can think of counter cases easily enough, in the large
       | iteration is less work and less vigilance. And you cannot scale a
       | project up without the vigilance requirement amortizing basically
       | to 0 per line of code.
        
         | tombert wrote:
         | In my mind, the biggest advantage to using tail recursion over
         | vanilla loops is the ability to keep using persistent data
         | structures without any (apparent) mutation.
         | 
         | At least in theory, a tail recursive call will be converted
         | into a dumb jump, so there shouldn't be a performance penalty,
         | but since from your code's perspective you're passing in the
         | stuff for the next iteration, you can keep using the pretty and
         | easy-to-reason-about structures without creating any kind of
         | mutable reference.
         | 
         | I'm not 100% sold on tail recursion in a broader sense, but at
         | least with Clojure's loop/recur stuff it is kind of cool to be
         | able to keep using persistent data structures across iterations
         | of loops.
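A minimal Python sketch of the pattern described above (hypothetical: Python has neither TCO nor Clojure's persistent structures, so a while loop stands in for `recur` and tuples stand in for persistent collections). Each step rebinds the loop state to a fresh immutable value instead of mutating in place:

```python
# Sketch of the loop/recur pattern. The "loop variables" are rebound
# to new immutable tuples on each pass; nothing is mutated in place.
def reverse_persistent(xs):
    state, acc = tuple(xs), ()            # immutable loop state
    while state:                          # the "recur" target
        head, rest = state[0], state[1:]
        state, acc = rest, (head,) + acc  # pass fresh values onward
    return list(acc)
```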
        
           | weitendorf wrote:
           | What makes tail recursion "special" is that there exists a
           | semantically equivalent mutable/iterative implementation to
           | something expressed logically as immutable/recursive. [0]
           | 
           | Of course, this means that the same implementation could also
           | be directly expressed logically in a way that is
           | mutable/iterative.
           | 
            | func pow(uint base, uint n): n == 0 ? return 1 : return
            | base * pow(base, n-1)
            | 
            | is just
            | 
            | func pow(uint base, uint n): uint res = 1; for(i=0; i<n;
            | i++){ res *= base } return res
           | 
           | There is no real "advantage" to, or reason to "sell" anybody
           | on tail call recursion if you are able to easily and clearly
           | represent both implementations, IMO. It is just a
           | compiler/runtime optimization, which might make your code
           | more "elegant" at the cost of obfuscating how it actually
           | runs + new footguns from the possibility that code you think
            | _should_ use TCO actually does not (because not all
            | immutable +
           | recursive functions can use TCO, only certain kinds, and your
           | runtime may not even implement TCO in all cases where it
           | theoretically should).
           | 
           | As an aside, in C++ there is something very similar to TCO
           | called copy-elision/return-value-optimization (RVO): [1]. As
            | with TCO it is IMO not something to "buy into" or sell
            | yourself
           | on, it is just an optimization you can leverage when
           | structuring your code in a way similar to what the article
           | calls "continuation passing style". And just like TCO, RVO is
           | neat but IMO slightly dangerous because it relies on implicit
           | compiler/runtime optimizations that can be accidentally
           | disabled or made non-applicable as code changes: if someone
            | wanders in and makes small semantic changes to my code
           | relying on RVO/TCO for performance they could silently break
           | something important.
           | 
            | [0] EXCEPT in practice, implementation differences and
            | optimizations can introduce observable side effects that
            | impact program correctness or semantics.
           | For example, a program could (perhaps implicitly) rely on the
           | fact that it errors out due to stack overflow when recursing
           | > X times, and so enabling TCO could cause the program to
            | enter new/undesirable states; or a program could rely on a
            | function F making X floating point operations take at
            | least Y cycles and at least Z microseconds, and not
            | function properly when F takes less than Z microseconds
            | after enabling vectorization. This is Hyrum's Law [2].
           | 
           | [1] https://en.wikipedia.org/wiki/Copy_elision#Return_value_o
           | pti...
           | 
           | [2] https://www.hyrumslaw.com/
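Worth noting: the recursive `pow` above is body-recursive, not tail-recursive -- the multiplication happens after the call returns, so TCO cannot apply to it as written. A tail-recursive form threads an accumulator instead. A minimal Python sketch (hypothetical names; CPython itself performs no TCO, so deep inputs would still overflow here):

```python
# Body-recursive pow multiplies *after* the recursive call returns.
# Threading an accumulator moves all pending work into the arguments,
# so the recursive call becomes the last action (a tail call).
def pow_tail(base, n, acc=1):
    if n == 0:
        return acc                            # nothing left to do
    return pow_tail(base, n - 1, acc * base)  # tail position
```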
        
             | gleenn wrote:
             | I would argue having the parameters that change during the
             | loop be explicit is a very nice advantage. Agree that the
             | things can be equivalent in terms of execution but the
             | readability and explicitness, and the fact that all the
             | parameters are listed in the same place is great.
        
           | ramchip wrote:
           | I think you're confusing mutation and variable reassignment?
        
           | charcircuit wrote:
           | >without creating any kind of mutable reference.
           | 
           | The parameter essentially becomes a mutable reference.
        
         | CalChris wrote:
         | I must have missed this class. How does one convert a recursive
         | descent parser into an iterative one?
        
           | pmg101 wrote:
           | https://news.ycombinator.com/item?id=44837949
        
           | jandrese wrote:
           | By making your own stack and using loops.
        
           | kylec wrote:
           | You can do it by managing your own stack, but I would argue
           | that doing so makes the algorithm LESS easy to scan and has
           | its own sets of problems.
        
           | electroly wrote:
           | Same way as any other algorithm: with an explicit heap-
           | allocated stack. I have a naive parser where I built operator
           | precedence into the grammar as nested productions instead of
           | using an algorithm like shunting yard. An input of
           | ((((((1)))))) blew the stack. Converted it to an explicit
           | stack and it was fine; deep for the call stack was not deep
           | for my own heap-allocated stack. Nasty code, though--I think
           | this serves as one of the counterexamples to the idea that
           | the recursive code gets simpler when turning it into
           | iteration. The original OP was talking more specifically
           | about _tail_ recursion, which a recursive descent parser won
           | 't be.
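A much-simplified, hypothetical sketch of the same move in Python: evaluating an input like ((((((1)))))) with a depth counter and heap-allocated strings instead of recursion, so nesting depth never touches the call stack:

```python
# Hypothetical toy version of the situation described above: evaluate
# an integer wrapped in arbitrarily many parens, e.g. "((((1))))",
# iteratively. Depth is tracked in a counter, not in call frames.
def eval_nested(src):
    depth = 0
    i = 0
    while i < len(src) and src[i] == '(':
        depth += 1
        i += 1
    j = i
    while j < len(src) and src[j].isdigit():
        j += 1
    value = int(src[i:j])
    if src[j:] != ')' * depth:           # every '(' needs its ')'
        raise ValueError("unbalanced parentheses")
    return value
```

Nesting 100,000 levels deep is no problem here, where the equivalent recursive-descent parse would blow the call stack.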
        
             | bjoli wrote:
              | There are many languages out there that can grow the
              | stack these days. If you have a language that can do
              | tail recursion, a growable stack is a very neat
              | complement. In Lisps it means being able to write a
              | list-building function in a direct consing way, and
              | still be faster than the TCO way.
        
           | Jtsummers wrote:
           | Every recursive algorithm can be made into an iterative
           | algorithm if you use an explicit stack instead of the
           | implicit call stack. It's not always cleaner.
           | 
            | In tail recursive algorithms, there is no stack, it's
            | just a straight loop.
            | 
            |     def foo(state, acc):  # if acc is needed
            |         if base_condition(state): return acc
            |         return foo(next(state), f(state, acc))
            | 
            | Is simply:
            | 
            |     def foo(state):
            |         acc = initial
            |         while not base_condition(state):
            |             acc = f(state, acc)
            |             state = next(state)
            |         return acc
            | 
            | If it's not tail recursive you introduce a stack, for
            | instance a DFS on a binary tree:
            | 
            |     def search(node, val):
            |         if node is None: return False  # empty tree
            |         stack = [node]
            |         while stack:
            |             n = stack.pop()
            |             if n.val == val: return True
            |             if n.right: stack.append(n.right)
            |             if n.left:  stack.append(n.left)
            |         return False
           | 
           | Note the insertion order is reversed from the recursive calls
           | in a typical DFS. We want left to be searched first and then
           | its children and then we "recur" back to right and deal with
           | its children, so we need to push right into the stack first
           | and then left.
           | 
           | When you have multiple mutually recursive functions (as is
           | likely the case in a recursive descent parser) then things
           | become more complicated, but it's still feasible.
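For the mutually recursive case, one standard technique (not mentioned above, but commonly used where TCO is absent) is a trampoline: each function returns a thunk instead of calling its partner directly, and a driver loop bounces until a plain value comes back. A Python sketch with a hypothetical even/odd pair:

```python
# A trampoline runs mutually recursive tail calls in constant stack
# space: the driver loop keeps invoking thunks until the result is a
# non-callable value.
def trampoline(thunk):
    while callable(thunk):
        thunk = thunk()
    return thunk

# Hypothetical mutually recursive pair, written in trampolined style:
# each returns a zero-argument lambda rather than calling the partner.
def is_even(n):
    return True if n == 0 else (lambda: is_odd(n - 1))

def is_odd(n):
    return False if n == 0 else (lambda: is_even(n - 1))
```

Calling `trampoline(is_even(100000))` uses only a few stack frames, where direct mutual recursion that deep would overflow.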
        
             | LegionMammal978 wrote:
             | Sometimes the messy translation into an explicit stack and
             | dispatch loop is necessary, if you want to pause the
             | calculation, serialize the current state, and reconstitute
             | it later. (E.g., if you want to add disk-saved checkpoints,
             | a frequent hassle in some of my projects.) Coroutines can
             | get you the equivalent of pausing and resuming for a
             | recursive routine, but I'm not aware of any language that
             | lets you serialize the call stack.
        
               | Jtsummers wrote:
               | > I'm not aware of any language that lets you serialize a
               | whole call stack.
               | 
               | That's basically what continuations provide. Scheme, SML,
               | and others provide them.
        
               | LegionMammal978 wrote:
               | Continuations allow an inactive call stack to sit around
               | in memory. But do any of those languages let you save a
               | continuation to a file and resume it in another program,
               | short of massive contortions to the code?
        
             | fellowniusmonk wrote:
             | When TCO recursion was first developed it was very
             | explicitly called out as a syntactic and structurally
             | improved GOTO but still fundamentally a GOTO that could
             | take params.
             | 
              | Recursion isn't physically real; any book that teaches
              | the abstraction before explaining either the call stack
              | (for non-TCO recursion) or the GOTO context is a book
              | actively trying to gatekeep CS and weed out readers.
              | (Not that any of those old Turbo Pascal/Borland books
              | from the 90s actually shipped code that compiled.)
             | 
             | I had several people on HN of all places try to "teach me"
             | recursion after this simple line inside a larger comment:
             | 
             | > It's like acting like recursion is physically real (it's
             | not) when it smuggles in the call stack.
             | 
             | Recursion IS real abstractly. It's just not physically
             | real, it was developed before we knew how DNA/RNA encoding
             | handles the same issues in a more performant way.
        
         | javcasas wrote:
         | Recursion deals with recursive data structures, and iteration
         | deals with "plain old" data structures.
         | 
         | When you use one to process the other, you get into trouble.
         | For example, you need to manage a stack to do iteration on your
         | binary tree. Or you need to construct slices to recurse on your
         | arrays.
         | 
         | It's all about ergonomics.
        
           | dreamcompiler wrote:
           | This is overly simplistic.
           | 
           | Is a binary tree recursive from the perspective of a type
           | declaration? Yes.
           | 
           | Is it recursive from the point of view of search? Depends. If
           | you're doing depth-first search, then you need a stack and a
           | recursive algorithm seems natural. If you're doing breadth-
           | first search, then you need a queue and the algorithm is less
           | obviously recursive.
           | 
           | In either case the program could be expressed recursively or
           | as an explicit loop. When a stack is needed, the underlying
           | hardware always provides automatic stack management so
           | recursion feels natural in that case.
           | 
           | When a queue is needed it's more of a tossup since hardware
           | and programming languages don't generally provide automatic
           | queue management, so you're going to need to manage that data
           | structure manually anyway. In which case whether you choose
           | to use recursion or not is more of a personal preference.
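To make the queue point concrete, a minimal Python sketch of the breadth-first search described above, using collections.deque as the manually managed queue (the Node class here is hypothetical):

```python
from collections import deque

# Minimal binary tree node for illustration.
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

# BFS needs a FIFO queue, which the language/hardware does not manage
# for us the way the call stack is managed, so we run it explicitly.
def bfs_contains(root, val):
    queue = deque([root] if root else [])
    while queue:
        n = queue.popleft()          # FIFO: visit level by level
        if n.val == val:
            return True
        if n.left:  queue.append(n.left)
        if n.right: queue.append(n.right)
    return False
```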
        
             | javcasas wrote:
             | So a tree is easier to do recursing vs iterating in some
             | cases, whereas in other cases recursion and iteration have
             | roughly the same level of complexity.
             | 
             | Kinda shows that recursive data structures are usually
             | better dealt with recursion. Worst case you end up with the
             | same level of difficulty as iteration.
        
         | Spivak wrote:
         | I'm in your camp, recursive code is hard for the next reader,
         | which might be you, to understand. It's a bit too magic looking
         | for my taste. If you're looping it should look like a loop, if
         | you're pushing onto a stack you should get to see the stack.
          | Explicit iteration is also safe against modifications that
          | might otherwise silently break the tail-call nature of the
          | code until you blow out the stack later.
         | It also gives you much saner and more debuggable stack traces
         | because all the state is in the current frame.
        
         | bjoli wrote:
         | I am in the other camp. I prefer tail recursion and recursion
         | over loops. However: For the simple cases it can and should
         | probably be abstracted away like the racket for loops or my own
         | goof-loop [1].
         | 
          | I just feel that a recursive call makes state much more
          | clear,
         | but then again I am no fan of mutation in general. In my old
         | python days I think a good 60% of my bugs were due to me being
         | bad at managing state.
         | 
         | [1] https://rikspucko.koketteriet.se/bjoli/goof-loop
        
       | tombert wrote:
       | I'm a pretty big functional programming nerd and I _want_ to like
       | tail recursion, but I honestly kind of feel like I agree with
       | Guido on it, which is that it kind of breaks the typical stack-
       | trace patterns.
       | 
       | I have kind of grown to prefer Clojure's loop/recur construct,
       | since it gives you something more or less akin to tail recursion
       | but it doesn't pretend to be _actually_ recursive.
        
         | dreamcompiler wrote:
         | Clojure does it this way because the JVM stupidly doesn't
         | support tail call optimization.
         | 
         | It is true that TCO messes up your stack traces. It is also
         | true that loops mess up your stack traces, because they don't
         | even create a stack trace. In many languages that support TCO,
         | TCO can be turned off for debugging and enabled for production,
         | so Guido's characterization of the issue is ridiculous.
        
       | munchler wrote:
       | Every developer should know how to write a tail-recursive
       | function and understand why it is equivalent to a loop. That
       | said, tail recursion is rarely needed in modern functional
       | programming. For example, why write out a recursive function when
       | a call to `fold` or `map` will do the same thing?
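In Python terms (the language used elsewhere in this thread), the contrast looks like this: a hand-written tail-recursive sum versus the equivalent fold via functools.reduce. Both compute the same value; the fold names the pattern instead of spelling out the recursion:

```python
from functools import reduce

# Explicit tail-recursive sum (note: CPython does not optimize this,
# so it is limited by the recursion depth).
def sum_rec(xs, acc=0):
    if not xs:
        return acc
    return sum_rec(xs[1:], acc + xs[0])   # tail call

# The same computation expressed as a fold: one line, no recursion
# visible at the call site.
def sum_fold(xs):
    return reduce(lambda acc, x: acc + x, xs, 0)
```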
        
         | xdavidliu wrote:
          | It's not entirely true that it does the same thing, even if
          | it gives the same result. In many programming languages,
          | fold and map can only act on non-lazy data structures, so
          | they require O(N) memory for the data that needs to be
          | folded over, while tail-recursive functions usually use
          | only O(1) memory, even stack memory.
         | 
          | A notable exception to the above is Python 3 with
          | generators, which I believe truly use O(1) memory with map
          | and fold.
         | Haskell has lists that are lazy by default, but if you fold or
         | map over them, it still "forces the thunk" for each element and
         | thus you still end up using O(N) memory.
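A small sketch of the generator point above, assuming CPython 3: map over a range is lazy, and sum consumes it one element at a time, so no O(N) intermediate list is ever materialized:

```python
# map() in Python 3 returns a lazy iterator; sum() folds over it
# element by element. Memory residency stays O(1) regardless of n,
# since no intermediate list is ever built.
def lazy_sum_of_squares(n):
    squares = map(lambda x: x * x, range(n))  # nothing computed yet
    return sum(squares)                       # consumes the stream
```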
        
         | Eji1700 wrote:
         | > For example, why write out a recursive function when a call
         | to `fold` or `map` will do the same thing?
         | 
         | Yeah this was a big help when I started F#. Basically "if
         | you're using the rec keyword, you're probably missing
         | something" and hell that even goes for a lot of uses of fold,
         | from the beginners perspective.
        
         | kmicinski wrote:
         | I agree with you--that's a topic I will definitely cover in my
         | blog, too. You make a good point: I know some folks who worked
         | at big financial orgs, writing hundreds of thousands of lines
         | of code, and _never_ wrote general-recursive functions (only
         | used simple recursors like foldl).
        
       | aryonoco wrote:
       | For what it's worth, in F#, tail recursive functions are turned
       | into equivalent loops by the .NET CLR.
       | 
       | I actually like the compromise. I get to write safe functional
       | code while getting all the benefits of a highly optimised
       | iterative operation.
        
       | username3 wrote:
        | > I wrote up a detailed blog post about tail call
        | optimization in Elixir/Erlang and its performance. The TL;DR,
        | sort of, is that non-tail-call-optimized recursive functions
        | (body-recursive) can be faster and more memory efficient than
        | TCO functions. This is something I had never considered
        | before; that TCO is always faster seems to be a common
        | misconception. My example is also not contrived - it's an
        | implementation of map.
       | 
       | https://pragtob.wordpress.com/2016/06/16/tail-call-optimizat...
        
       | wagwang wrote:
        | Normal recursion is just a loop and a stack; it turns out
        | that if you can optimize away the stack, the recursion just
        | becomes a loop.
        
       | mehulashah wrote:
       | Tail recursion is beautifully deep and simple. It (and as a
       | corollary CPS) makes clear what state matters to your loop (and
       | function) and avoids extraneous variables in loops as well as
       | implicit unneeded state in functions. It also makes it easier to
       | show the correctness of your loops. Sure, there are other
       | functional language constructs like fold and map, if your problem
       | is amenable to them. Tail recursion is more general and simpler.
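A hypothetical sketch of the CPS connection mentioned above, in Python: the pending work of a factorial is reified as a continuation parameter `k`, so every recursive call is a tail call and all loop state is explicit in the parameters (CPython still performs no TCO, so this is illustrative only):

```python
# Continuation-passing-style factorial: instead of multiplying after
# the recursive call returns, we pass the pending multiplication
# forward as a continuation. Every call is now in tail position.
def fact_cps(n, k=lambda x: x):
    if n == 0:
        return k(1)                           # run the accumulated work
    return fact_cps(n - 1, lambda r: k(n * r))  # extend continuation
```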
        
       ___________________________________________________________________
       (page generated 2025-08-11 23:00 UTC)