[HN Gopher] Total Functional Programming (2004) [pdf]
       ___________________________________________________________________
        
       Total Functional Programming (2004) [pdf]
        
       Author : leonidasv
       Score  : 83 points
       Date   : 2024-03-07 12:07 UTC (10 hours ago)
        
 (HTM) web link (ncatlab.org)
 (TXT) w3m dump (ncatlab.org)
        
       | lynx23 wrote:
       | 20 year old paper. People interested in total functions should
       | probably have a good look at Idris.
        
       | FrustratedMonky wrote:
       | Isn't this handled by 'purity' of functions?
       | 
       | It seems like the big idea here is that Int->Int always, no
       | errors, faults, 'impact by extra data'.
       | 
       | Isn't that all also solved by pure functions? And is incorporated
       | into functional languages at this point?
        
         | gpderetta wrote:
         | the big Idea is that the language is non-Turing complete and
         | termination can be proven (and co-data is used for non-
         | terminating programs).
        
           | zarathustreal wrote:
            | I've had an idea for a language like this for a while now
            | that wouldn't exactly "type check" in the sense of the
            | unification finishing, but where "type checking" would be
            | continuously running in the background as you're programming.
            | This co-data approach seems interesting toward that goal.
        
             | anon291 wrote:
             | Isn't this just Idris, Haskell, Agda, etc with LSP
             | integration? IIRC, Agda has always had excellent editor
             | integration, and Idris too.
        
               | zarathustreal wrote:
               | I'm not too familiar with any of those languages but I'm
               | guessing the difference is that type checking does
               | eventually "finish" in those languages and that to
               | accomplish this, the type systems are necessarily _not_
               | Turing-complete, whereas I'm suggesting the alternative.
               | Type checking in my idea would actually produce artifacts
               | that you'd store with your code essentially saving the
               | state of type checking up to whatever bounds had been
               | reached since the last code change. Kind of like a
               | combination of type checking and unit testing that takes
               | advantage of pure functional programming
        
               | zozbot234 wrote:
               | But "type checking" that's Turing complete is not type
               | checking at all, it's just a phase distinction. It's no
               | different than choosing whether to run code at "compile"
               | vs. "run" time, except that in practice, we don't even
               | want to have fully Turing-complete compile workflows that
               | might, e.g. not terminate. Dependently typed languages
               | implement this phase distinction via program extraction:
               | you "extract" the code to be fed into the "runtime" phase
               | of execution.
        
         | tromp wrote:
         | No; purity is an orthogonal notion. Haskell functions are pure
         | in that given the same arguments, a function always produces
         | the same output. But Haskell functions can be partial; for some
         | arguments there is no output, or the output is undefined.
         | 
         | Turner proposes a language for pure total functions.
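A minimal Haskell sketch of the distinction (the names `headInt` and `headOr` are illustrative, not from the paper): `headInt` is pure but partial, while `headOr` is pure and total.

```haskell
-- Pure but partial: no output for the empty list.
headInt :: [Int] -> Int
headInt (x:_) = x
-- 'headInt []' crashes at runtime with a pattern-match failure.

-- Pure and total: every input is mapped to an output.
headOr :: Int -> [Int] -> Int
headOr d []    = d
headOr _ (x:_) = x
```

A totality checker in the sense of Turner's paper would accept `headOr` and reject (or flag) `headInt`.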
        
         | anon291 wrote:
         | > It seems like the big idea here is that Int->Int always, no
         | errors, faults, 'impact by extra data'.
         | 
          | Well no. This is computer science 101. In any language that
          | can compute everything useful (i.e. Turing complete or lambda-
          | calculus-complete... they're equivalent), some functions cannot
          | be guaranteed to produce an answer in finite time (or at all,
          | really). In FP parlance, we say that such functions produce a
          | 'bottom' value. The notation 'Int -> Int' in a language like
          | Haskell denotes a function that takes an Int and returns an
          | Int, _if_ it terminates.
         | 
         | In a total language, you're guaranteed to have an Int after
         | invoking the function within a finite amount of time, even if
         | that time is greater than the age of the universe. But it will
         | eventually return an answer. Whereas a haskell program may
         | truly never terminate.
         | 
          | Here is an example of a haskell program that is syntactically
          | correct and well-typed but is actually a bottom, despite the
          | type signature claiming it returns Int:
          | 
          |     f :: Int -> Int
          |     f x = f x
        
           | FrustratedMonky wrote:
           | Guess, that is what I was asking.
           | 
           | If you take any current functional language, and add the
           | criteria that it must only use pure functions, then Int->Int
           | is the same as described in the Total Language. Because if
           | there are no outside interactions inside the function, it's
           | 'pure', then the Int->Int is guaranteed.
           | 
            | Doesn't using only pure functions equate to these Total
            | Functions?
        
             | FrustratedMonky wrote:
             | This might be one of those 'sloppy' definitions.
             | 
             | From random search, seems like some people assume this :
             | Pure=Total
             | 
             | "A pure function must be total Firstly, a function must be
             | total, this means that for each input that is provided to
             | the function there must be a defined output."
        
             | anon291 wrote:
             | > If you take any current functional language, and add the
             | criteria that it must only use pure functions, then
             | Int->Int is the same as described in the Total Language.
             | 
             | The function above is pure. In fact, all functions in
             | Haskell are, even those returning IO.
        
               | FrustratedMonky wrote:
               | Then they are also 'total' by some definitions.
               | 
               | So they would match the criteria of the original post?
               | 
               | By some definitions, to be 'pure' includes being 'total'.
               | 
                | Guess, back to original question. The ideas in this paper
                | from 2004 seem to have been incorporated into more
                | common languages like Haskell in 2024.
        
         | lynx23 wrote:
         | No. Haskell has this notion of what they call "bottom", more
         | easily remembered as what happens if you call undefined. The
         | issue is, your function can be as pure as it wants, as long as
         | you are able to write "f :: T -> T; f = undefined" all your
         | purity based analysis goes to hell.
        
           | FrustratedMonky wrote:
            | Think original question I had was: if you make "f =
            | undefined" be Pure. Don't leave it undefined. It has a
            | defined signature Int->Int. Then "f" should return Int every
            | time, guaranteed, and would be the equivalent of the Total
            | Functional case.
        
             | lynx23 wrote:
             | Even if undefined were not allowed, you'd still be able to
             | write "f :: Int -> Int; f x = f x" which would never return
              | an Int. Totality is about getting a grip on these cases. As
              | mentioned more eloquently than I could in this thread, it
              | was proven a long time ago that general termination-
              | analysis is impossible to achieve. What Idris for instance
              | does is to use proofs, and to an extent heuristics, to
              | still decide in special cases that a function is total,
              | IOW terminates. That's a pretty cool part of CS, since we
              | basically know something is impossible but still make
              | small steps towards tackling the issue at least for some
              | cases we care about.
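A sketch of the kind of special case such proofs and heuristics can settle (illustrative Haskell; Haskell itself performs no totality checking):

```haskell
-- Structural recursion: each call is on a strictly smaller piece
-- of the input, so a totality checker can prove termination.
len :: [a] -> Int
len []     = 0
len (_:xs) = 1 + len xs

-- Not structurally decreasing: the argument can grow, so a checker
-- would flag this as possibly non-total, even though it is
-- conjectured to terminate for every positive n.
collatz :: Integer -> Integer
collatz 1 = 0
collatz n
  | even n    = 1 + collatz (n `div` 2)
  | otherwise = 1 + collatz (3 * n + 1)
```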
        
       | btreecat wrote:
       | > to make programming more closely related to mathematics
       | 
       | Why is this taken as a given to be a good thing? Our hardware has
       | state. Any IO is a side effect, so what's the non trivial use
       | case for "pure functions" all the way down?
       | 
       | I think FP paradigms can be useful in specific problem domains,
       | I'm skeptical of the general utility (vs resource spend) of
       | strong FP languages.
        
         | diggan wrote:
         | As you say, you cannot make a 100% side-effect free program,
         | unless it doesn't do anything at all.
         | 
         | But I do think it's valuable to aim for as side-effect-free as
         | possible, and concentrate the statefulness into certain
          | part(s), rather than spread out all over the place, which is
          | the typical way.
        
           | btreecat wrote:
           | Practicing software architecture should generally lead you to
           | minimize side effects and efficiently manage sections of
           | state small enough to fit into your working memory IMO.
           | 
           | FP can be used to get you there, but it strongly optimizes
           | for readability of experts in most cases which harms
           | readability in general, as there are always fewer experts.
           | 
           | At least that's been my experience with FP languages in the
           | professional space. People often underestimate the benefits
           | of good tooling, a large community, and deep candidate pool.
        
             | epgui wrote:
             | > it strongly optimizes for readability of experts in most
             | cases which harms readability in general
             | 
             | Strongly disagree. In the way you speak of "readability",
             | you're confusing simplicity and familiarity, which are
             | really two different things.
             | 
              | If FP is not "familiar" to you, then it will require some
             | effort on your part, sure. But you can teach FP to an
             | absolute beginner, and they will likely find it just as
             | simple, if not simpler, than an imperative mode of
             | reasoning. It's basically just a social and cultural issue
             | at this point.
        
               | sodapopcan wrote:
               | Very much so to all of this. When I started programming I
               | was just messing around and doing so as more of a
               | curiosity than anything. What I came up with on my own
               | was much closer to FP. When I started to really learn and
                | was presented with OO, it took me what felt like a really
                | embarrassingly long time to understand how classes worked
                | and why I'd want them. Once I grokked it I had a grand ol'
               | time, but then I learned a functional language and it
               | felt so good to learn an actual well-designed version of
               | how I initially imagined programming should be :)
        
           | lupire wrote:
           | No programming languages have side effects. Side effects are
           | introduced by the runtime.
           | 
           | All programming languages have side effects. The question is
           | what logical constraints limit them.
        
           | garethrowlands wrote:
           | Absolutely.
           | 
           | It's often helpful to distinguish between effects and side-
           | effects. The idea is that an effect is deliberate and
           | explicit, such as, say, sending an email or moving a robot's
           | arm. Whereas a side-effect is one that happens but wasn't
           | your explicit objective - heating the CPU, say, or silently
           | caching a value in thread-local storage. Haskell programs
           | often use the type system to make these kinds of distinction.
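A small sketch of that distinction in Haskell types (function names are illustrative): the pure function's type promises no effects, while `IO` marks the deliberate one.

```haskell
-- The type promises no effects: only a value is computed.
total :: [Int] -> Int
total = sum

-- The IO in the type declares a deliberate effect (printing).
report :: [Int] -> IO ()
report xs = putStrLn ("total = " ++ show (total xs))
```

Incidental side-effects like heating the CPU are, of course, invisible to any type system; the types track only the effects the language models.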
        
         | mightybyte wrote:
         | There are many situations where the extra constraints on pure
         | functions are very useful. Pure functions can't segfault. They
         | can't overwrite memory used by other parts of your program due
         | to stray pointer references. They can't phone home to Google.
         | Verification and optimization is often much simpler when you
         | know that some functions are pure. Reactive programming
         | frameworks that execute callbacks in response to events can
         | cause really weird bugs if the callback mutates state
         | somewhere. I could go on and on. Nobody is arguing for pure
         | functions all the way down. A program's top-level functions
         | will usually be side-effecting, and then you can call out to
         | pure code any time. In practice, a surprising amount of
         | application logic can end up being pure functions.
        
           | thfuran wrote:
           | >Reactive programming frameworks that execute callbacks in
           | response to events can cause really weird bugs if the
           | callback mutates state somewhere
           | 
           | What's the point of a callback that doesn't mutate state
           | somewhere?
        
             | andyferris wrote:
             | A lot of "dependency injection" type frameworks run this
             | way.
             | 
             | The framework or runtime calls your function, and the
             | return value tells it what to do next.
             | 
             | Prominent examples include React and Excel.
        
             | mightybyte wrote:
             | It allows you to separate non-stateful business logic from
             | the stateful update logic. Reactive frameworks usually
             | handle the state mutation for you, eliminating the
             | potential for UI glitches where something that depends on
             | the updated state doesn't get recomputed/redrawn.
        
         | user3939382 wrote:
         | I'd rather a language is designed to accommodate the human
         | mind, and work back from there through the compiler to figure
         | out how to map that to the capabilities of the hardware, than
         | the reverse.
         | 
         | Speaking for myself, when I'm mentally modeling how I logically
         | work through a problem, which concepts I've allocated, used,
         | and freed are not a part of my thought process.
        
           | dartos wrote:
           | That's highly dependent on the kind of software you write.
           | 
           | Some software necessitates the programmer thinking about
           | memory usage and optimization (games, simulations, machine
           | learning, etc)
        
             | zarathustreal wrote:
             | Thinking about memory usage and optimization is the role of
             | the compiler. The fact that some software necessitates the
             | programmer thinking about it is just a consequence of
             | current technology.
        
               | dartos wrote:
               | I don't think that's true.
               | 
               | There are some cases where novel memory layouts are
               | necessary. Especially in graphics programming where you
               | need to, at times, cleverly lay out memory to use the
               | most of every available buffer to create arbitrary visual
               | effects.
               | 
               | Compilers can't automatically create novel layouts.
        
               | lupire wrote:
               | That's 0.00001% of the effort of programming.
               | 
               | You also need custom hardware sometimes, like TPUs. That
               | doesn't make software unimportant.
        
               | AnimalMuppet wrote:
               | We've been waiting for a "sufficiently advanced compiler"
               | for a very long time. If you're going to write programs
               | for real computers, using real compilers, sometimes you
               | have to care about such things. (If you're going to write
               | programs for hypothetical computers, using hypothetical
               | compilers, such constraints don't apply to you. Have fun
               | running them, though...)
        
             | epgui wrote:
             | That is absolutely and totally true, but 99.9999% of
             | engineers are probably working on things like web apps or
             | other high-level code.
             | 
             | The argument for FP was never that it was perfectly well-
             | suited to every application-- merely that it was better-
             | suited for your "typical" engineering tasks of writing
             | business logic. FP, proof languages, and logic programming,
             | are also all uniquely well-suited in situations that
             | require correctness and formal verification (safety,
             | aerospace, testing, etc).
             | 
             | To make a very broad, sweeping statement: if your project
             | has a `node_modules` or a `venv`, then FP is the medicine
             | you didn't know you needed. If you're writing low-level
             | metal code / assembly, then you should already know you're
             | not the target audience.
        
               | dartos wrote:
               | The software world is larger and more diverse than you
               | think.
               | 
               | Maybe on HN 99% of people work on web.
               | 
                | In reality, probably more like 70%.
        
           | jjav wrote:
           | > I'd rather a language is designed to accommodate the human
           | mind
           | 
           | Agreed, but is FP that?
           | 
           | When I write instructions to someone who does not know how to
           | do something (like for my young kid or an aging parent on
            | using some tech they're not familiar with) it will be strictly
           | procedural. Step 1 do this, then step 2 do that, etc.
        
         | dpratt71 wrote:
         | If you have a goal of being able to confidently reason about
         | your code, such as to be confident it won't go "wrong", then
         | making it more math like would seem to make a lot of sense.
         | 
         | As far as the IO thing is concerned, one could use pure
         | functions to construct a model of an imperative program. Then
         | you just need something to run it... There is a sense in which
         | this is exactly what is done in practice.
        
           | AnimalMuppet wrote:
           | > If you have a goal of being able to confidently reason
           | about your code, such as to be confident it won't go "wrong",
           | then making it more math like would seem to make a lot of
           | sense.
           | 
           | Is it easier to reason about a 10,000-line-long math proof,
           | or a 10,000-line-long program? I'm not sure that the math is
           | actually easier.
           | 
           | Is it easier to write a fully-correct 10,000-line-long math
           | proof, or a 10,000-line-long program? Again, I'm not sure
           | that the math is actually easier.
           | 
           | Is it easier to write a formal prover for the math? Almost
           | certainly. And maybe, with a formal prover, writing the
           | correct math is easier...
        
             | mrkeen wrote:
             | > Is it easier to reason about a 10,000-line-long math
             | proof, or a 10,000-line-long program? I'm not sure that the
             | math is actually easier.
             | 
             | Don't compare them on the basis of familiarity. Making your
             | programming language look mathy is not the point.
             | 
             | I'd offer a different comparison:
             | 
             | Reason about a 10,000-line-long math proof, or a
             | 10,000-line-long math proof _in which there are
             | instructions to jump back-and-forth with your pencil,
             | crossing out and updating previously-calculated values in-
             | place_.
        
             | foobarchu wrote:
             | You don't do a single proof of the entire 10k line program,
             | you do proofs of the many small functions that make it up.
             | If you can use each proof as an axiom going into proving
             | the functions that use it, then each of them becomes easier
             | to prove too. If I have a function that might sometimes
             | mutate a variable off in another part of the program,
             | that's just inherently harder to reason about that one that
             | is idempotent and guaranteed to have no side effects.
        
             | garethrowlands wrote:
             | The most common proofs are those that the compiler
             | constructs while type checking. The compiled program is the
             | proof witness. All compilers can be thought of as producing
             | a proof-by-construction but they vary wildly in the
             | propositions they can prove.
             | 
             | If this idea is new to you and you want to learn more,
             | google "propositions as types" or "the curry howard
             | correspondance".
        
         | alipang wrote:
         | Well, why would it be taken as a given that programming should
         | mimic the hardware?
         | 
         | Pure functions are easier to reason about (there may be
         | exceptions of course), that's why they're interesting.
         | 
         | This paper is not related to side-effects - it's related to
         | "totality", meaning all programs terminate. This could also be
         | interesting for side-effecting programs, but it's of course
         | much harder to verify in this case.
         | 
            | Due to the halting problem's existence, a total language
            | cannot be Turing complete, but many useful programs can be
            | verified.
         | 
         | I didn't see anything in the paper that claims that pure
         | functions should be "all the way down", and the paper is not
         | about side effects anyway.
        
           | eru wrote:
           | A slight tangent: there's also a related notion to
           | termination, that allows you to describe event loops.
           | 
           | Basically it's about the loop making progress on every
           | iteration, but we can have an infinite number of iterations.
           | 
           | I think the distinction is related to the one between data
           | and co-data.
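In Haskell-style notation the idea might be sketched like this: the stream is co-data, never terminating as a whole, but productive, since each element arrives after finite work.

```haskell
-- An infinite stream of events: the program as a whole never
-- terminates, but each step produces the next element in finite time.
ticks :: [Integer]
ticks = go 0
  where go n = n : go (n + 1)   -- productive: emits before recursing

-- Consumers can safely observe any finite prefix.
firstFive :: [Integer]
firstFive = take 5 ticks        -- [0,1,2,3,4]
```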
        
           | wredue wrote:
           | >Well, why would it be taken as a given that programming
           | should mimic the hardware?
           | 
           | Because that's the environment that programs are run on,
           | doing anything else is fighting against the environment. I
           | would argue that humans have done this to great effect in
           | some areas, but not in programming.
           | 
           | >Pure functions are easier to reason about (there may be
           | exceptions of course), that's why they're interesting.
           | 
           | Prove it.
        
             | shrimp_emoji wrote:
             | > _doing anything else is fighting against the environment_
             | 
             | Video game programming, where performance really matters,
             | is a great way to see the cost of forcing the hardware to
             | deal with human abstractions (mainly OOP). Rules like
             | "never have a Boolean in a struct" or "an array with linear
             | access can be faster than a tree with log access" wake you
             | up to the reality of the hardware. :p
             | 
             | In academia, and the especially academic, utopian realm of
             | functional programming, you're trained to live in
             | dreamland.
             | 
             | If you can afford it, though, hey, it's a nice place to be.
        
               | wredue wrote:
               | Andrew Kelley has a pretty good talk on gaining
               | performance in the zig compiler.
               | 
               | In many cases performing math again is faster than
               | memoization.
               | 
               | General gist is to try to cram as much in to the cache
               | lines as possible, sometimes even at the "cost" of
               | calculating values again.
        
               | sodapopcan wrote:
               | > In academia, and the especially academic, utopian realm
               | of functional programming, you're trained to live in
               | dreamland.
               | 
               | OO outside of contexts where every little bit of
                | performance matters suffers in exactly the same way.
               | 
               | > If you can afford it, though, hey, it's a nice place to
               | be.
               | 
               | No arguments there! A huge majority of applications can
               | afford to be written this way, even ones where
                | performance is a concern (WhatsApp, for example).
        
               | jjav wrote:
               | > A huge majority of applications can afford to be
               | written this way, even ones where performance is a
               | concern
               | 
               | This is sometimes true for any one given app but it's not
               | a good overall outcome.
               | 
               | It is why we have today multi-GHz CPUs with lots of cores
               | and dozens of GB of RAM and yet... most actions feel less
               | responsive today than in 1995 with a 120MHz CPU, 1 core
               | and 1MB.
        
               | sodapopcan wrote:
               | My comment was in response to the need to squeeze out
               | every last bit of performance you possibly can. You're
               | talking about ignoring performance altogether which is
               | not what I'm talking about.
        
               | garethrowlands wrote:
               | Then again, abstractions can be helpful too, including in
               | game programming. Epic's heavily invested, for example.
               | Or in databases, relational algebra often beats out an
               | array with linear access. I agree that OOP-in-the-small
               | lacks mechanical sympathy though. That's one reason for
               | entity-component-model, though another, I'd argue, is
               | that it provides abstractions that are easier to reason
               | about.
        
             | madmax96 wrote:
             | >Prove it.
             | 
             | Proofs are good evidence that pure functions are easier to
             | reason about. Many proof assistants (Coq, Lean, F*) use the
             | Calculus of Inductive Constructions, a language that only
             | has pure, total functions, as their theoretical foundation.
             | The fact that state of the art tools to reason about
              | programs use pure functions is a pretty good hint that
             | pure functions are a good tool to reason about behavior. At
             | least, they're the best way we have so far.
             | 
             | This is because of referential transparency. If I see `f n`
              | in a language with pure functions, I can simply look up the
             | definition of `f` and copy/paste it in the call site with
             | all occurrences of `f`'s parameter replaced with `n`. I can
             | simplify the function as far as possible. Not so in an
             | imperative language. There could be global variables whose
             | state matters. There could be aliasing that changes the
             | behavior of `f`. To actually understand what the imperative
             | version of `f` does, I have to trace the execution of `f`.
             | In the worst case, __every time__ I use `f` I must repeat
             | this work.
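For instance, a sketch of that substitution-based reasoning in Haskell (`double` is an illustrative name):

```haskell
double :: Int -> Int
double n = n + n

-- Referential transparency lets us reason by rewriting:
--   double (double 3)
-- = double (3 + 3)        -- substitute the argument into the body
-- = double 6
-- = 6 + 6
-- = 12
example :: Int
example = double (double 3)   -- 12
```

No global state or aliasing can invalidate these steps, which is exactly what fails for an imperative `f`.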
        
         | llm_trw wrote:
         | Because a stateless program can execute in only one way for any
         | given inputs.
         | 
          | A stateful program with n bits of state can execute in up to
          | 2^n different ways for any given inputs.
        
           | mrkeen wrote:
           | > 2^n
           | 
           | If it only computes from an expression to an expression. I
           | think the bound might be higher if you use imperative
           | statements (/effects).
           | 
           | You can fill the whole 2^n output space by calling random().
           | But in addition to that, the program might also delete an
           | arbitrary file, or deadlock, etc.
        
         | epgui wrote:
         | > Why is this taken as a given to be a good thing?
         | 
         | It is not taken as a given at all. It is an acknowledgment that
         | it makes perfect sense to build on top of centuries of work and
         | theory on well-behaved tools, rather than reinventing the wheel
         | with ill-behaved tools.
        
           | foobarchu wrote:
           | It's honestly wild to see this even being argued about in
           | 2024. Ten years ago, I remember the discussions around FP
           | being universally "yes of course it's better, but what's the
           | best way to make it practical in the real world where you
           | need side effects".
        
             | pjmlp wrote:
             | There are folks that think they can still reason about code
             | being executed on 128 cores (AMD EPYC(tm) 9004).
        
               | epgui wrote:
               | I mean it's pretty trivial if you have referential
               | transparency, which is what FP gives you.
        
               | pjmlp wrote:
               | Indeed, that is the point I am making.
        
         | anon291 wrote:
         | > Why is this taken as a given to be a good thing?
         | 
         | Total programming is not useful for everything. Then again,
         | neither is assembly language, or C++, or Rust, or Python. Each
         | has a particular use case in which they are an appropriate
         | tool, and many use cases in which they are not. The job of an
         | engineer is to figure when each tool is appropriate.
         | 
         | Very good engineers keep tabs on lots of tools so that they can
         | choose the best tool when the time calls.
         | 
         | > I think FP paradigms can be useful in specific problem
         | domains, I'm skeptical of the general utility (vs resource
         | spend) of strong FP languages.
         | 
         | The vast majority of FP languages offer no totality checking.
         | Totality checking is mainly used in theorem provers, which are
         | a small subset of FP. I'm not sure what you're going at here,
         | but you're confusing two separate things.
        
         | delta_p_delta_x wrote:
         | > Our hardware has state. Any IO is a side effect, so what's
         | the non trivial use case for "pure functions" all the way down?
         | 
         | I think James Mickens in _The Night Watch_ [1] puts it best:
         | 
         | > _Pointers are real. They're what the hardware understands.
         | Somebody has to deal with them. You can't just place a LISP
         | book on top of an x86 chip and hope that the hardware learns
         | about lambda calculus by osmosis._
         | 
         | [1]: https://www.usenix.org/system/files/1311_05-08_mickens.pdf
        
           | lupire wrote:
           | That just means a compiler is required.
           | 
           | What percent of your last program was written in register
           | move commands?
        
           | trealira wrote:
           | You could say the same thing about variables, expressions,
           | statements, and structured programming. It's a lot higher
           | level than assembly language or machine code programming.
           | 
           | > _Registers, addresses, and machine instructions are real.
           | They 're what the hardware understands. You can't just place
           | an ALGOL book on top of an IBM 709 and hope that the hardware
           | learns about expressions, variables and records by osmosis._
        
         | abathologist wrote:
         | > Why is this taken as a given to be a good thing?
         | 
         | Aspiring to rigorous reason and maximum reasonability is
         | necessary for virtue. Maths is the way we reason rigorously
         | about space and time.
        
         | pjmlp wrote:
         | It allows the scalability of algorithms like Star Lisp on the
         | Connection Machine; the abstraction allows algorithms to be
         | easily moved from a single thread to multiple threads, multiple
         | cores, or GPGPU.
         | 
         | It is no accident that CUDA and SYCL libraries are written in a
         | functional way, or the AI middleware for compute.
        
         | trealira wrote:
         | > Why is this taken as a given to be a good thing?
         | 
         | So that we can prove things about software more easily. If we
         | only write stateful imperative programs, we're stuck with tons
         | of Hoare triples and separation logic rather than being able to
         | use equational reasoning or type-driven deductions. For
         | example, this person wrote a parallel wc in Haskell, and they
         | used semigroups to parallelize the word counting.
         | 
         | https://chrispenner.ca/posts/wc
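A minimal sketch of that trick, assuming nothing from the linked article (the names below are invented): make the per-chunk result a Semigroup, so chunks can be counted independently and then merged.

```haskell
import Data.Char (isSpace)

-- A per-chunk word count that merges associatively. The edge flags record
-- whether the chunk begins or ends in the middle of a word, so the merge
-- can avoid double-counting a word split across a chunk boundary.
data Chunk = Chunk
  { wordCount     :: Int
  , startsMidWord :: Bool
  , endsMidWord   :: Bool
  } deriving (Eq, Show)

instance Semigroup Chunk where
  Chunk n1 s1 e1 <> Chunk n2 s2 e2 =
    Chunk (n1 + n2 - overlap) s1 e2
    where
      -- a word split across the boundary was counted once on each side
      overlap = if e1 && s2 then 1 else 0

countChunk :: String -> Chunk
countChunk "" = Chunk 0 False False
countChunk s  = Chunk (length (words s))
                      (not (isSpace (head s)))
                      (not (isSpace (last s)))
```

Because `<>` is associative, chunks can be processed in parallel and combined in any order-preserving grouping, which is what makes the parallel wc work.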
        
         | nyrikki wrote:
         | Very big difference between pure and total functions.
         | 
         | Pure functions are memoryless and deterministic.
         | 
         | Total functions are defined for all elements in their domain.
         | 
         | Square (x^2) is a total function. Reciprocal (1/x) is not:
         | 0 is a real number, but it has no reciprocal.
         | 
         | "Total" and "pure" are orthogonal concepts.
         | 
         | A pure function that raises an error with the input 0/0 is not
         | total, and you can have total functions that aren't pure.
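To make the orthogonality concrete, a minimal Haskell sketch (the names here are mine, not from the paper):

```haskell
-- Pure but not total: effect-free and deterministic, yet undefined at 0.
recip' :: Double -> Double
recip' 0 = error "0 has no reciprocal"
recip' x = 1 / x

-- Pure and total: defined and terminating for every Integer.
square :: Integer -> Integer
square x = x * x
```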
        
           | mrkeen wrote:
           | > Total functions are defined for all elements in its domain.
           | 
           | Do you include "same-output-for-the-same-input" as part of
           | _defined_?
        
             | nyrikki wrote:
             | In the context of this paper, where they are talking about
             | total forms of pure functions, yes.
             | 
             | But in general, no.
             | 
             | I could, say, have a total function with an internal counter
             | that adds 1 each time it is run.
             | 
             | That would still be a total function, but not pure.
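A sketch of such a counter in Haskell, assuming an IORef holds the hidden state (all names hypothetical): the function is defined and terminates for every input, but repeated calls with the same argument give different results.

```haskell
import Data.IORef

-- mkCounter returns a function that is total in the sense above, but not
-- pure: a hidden counter is bumped on every call.
mkCounter :: IO (Int -> IO Int)
mkCounter = do
  ref <- newIORef (0 :: Int)
  pure $ \x -> do
    modifyIORef' ref (+ 1)
    n <- readIORef ref
    pure (x + n)

-- Same input twice, two different outputs.
demo :: IO (Int, Int)
demo = do
  f <- mkCounter
  a <- f 10
  b <- f 10
  pure (a, b)
```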
        
         | howerj wrote:
         | Yeah, I don't think this is a good thing. I also wonder: which
         | mathematics? Mathematics is a big field with wildly different
         | and incompatible notations.
         | 
         | I think there's a lot to discover and apply to programming
         | languages, but the successful result of that will be proving
         | largely procedural programming languages as having certain
         | properties such as memory safety.
         | 
         | Correctness itself is overrated (even in a safety critical
         | field - you always assume the software is faulty and try to
         | make a design that mitigates potential faults). Software is for
         | utility, and even buggy software can have great utility.
        
         | samth wrote:
         | > Our hardware has state.
         | 
         | Indeed it does. But why is this opposed to a connection to
         | mathematics?
        
       | agentultra wrote:
       | I like how Lean4 handles totality. You can tell the compiler to
       | trust you. Or you can provide a proof. The latter is more useful
       | while the former is sometimes pragmatic.
        
       | llmzero wrote:
       | On page 2, there is the theorem: Theorem ∀ n in Nat. f n (fib p)
       | (fib (p+1)) = fib (p+n). I think it should be: for all p in Nat,
       | (fib p) + (fib (p+1)) = fib (p+2); otherwise there is something
       | mysterious here.
        
         | mayoff wrote:
         | The fib' function (the more efficient version of fib) is
         | defined using f. So the goal is to prove that f computes the
         | same thing as fib (given the appropriate arguments to f and
         | fib). Hence the theorem needs to use f on at least one side of
         | the equation.
        
       | JonChesterfield wrote:
       | A few comments have cheerfully reproduced the stock objection
       | that a program with no side effects does nothing and is thus
       | useless. By extension functional programming is useless and it's
       | all pointless.
       | 
       | I'd like to counter that a function from vector of bytes to
       | vector of bytes can be pure, terminating, implement some
       | algorithm of great commercial interest and map exactly onto a
       | program that reads said vector of bytes from stdin and sends them
       | to stdout.
       | 
       | If you're really generous, it can write two vectors of bytes, and
       | tie the other one to stderr.
       | 
       | That covers such useless academic things as _your compiler_.
        
         | kstrauser wrote:
         | I guess that depends on whether you consider write(2) to have
         | side effects. No, really, I've been in the room when people
         | debated whether that was pure enough. It's an interesting
         | exercise to talk through, at least.
        
           | mrkeen wrote:
           | Even taking write as an effect (which I do), that doesn't
           | stop it from being a trivial attachment to an otherwise pure
           | artifact.
           | 
           |       IO in -> pure stuff of interest -> IO out
           | 
           | The purity is in no way compromised by being IO-adjacent.
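As a sketch of that shape (the `transform` name is mine): the artifact of interest is an ordinary pure function, and the IO is a one-line wrapper at the boundary.

```haskell
import Data.Char (toUpper)

-- The pure stuff of interest: a total String -> String function.
transform :: String -> String
transform = map toUpper

-- The IO-adjacent wiring, stdin to stdout, lives in one line:
--   main :: IO ()
--   main = interact transform
```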
        
           | JonChesterfield wrote:
           | I think that's an aliasing question.
           | 
           | If write is sending bytes to somewhere else, it's pure. In
           | exactly the same way that returning a value type from a
           | function is pure, or passing a value type to some other
           | function is pure. Since they're all the same thing. A unix
           | process is a function in a literal sense.
           | 
           | If you've aliased read and write to the same thing then
           | mutation is back with us and sorrows return.
           | 
           | Vaguely tangential, it's totally possible to write a program
           | that uses read() to copy all of stdin to its own address
           | space allocated with brk(), does whatever maths it wants,
           | then sends some of the result back out with write(). Three
           | syscalls, useful userspace program, no libc involved.
        
         | jrvarela56 wrote:
         | This talk [1] made me conclude something along these lines. You
         | need side-effects to make anything useful, just make sure
         | they're segregated in a specific place in your program. This
         | makes it easier to make most of the program pure and make it
         | clear where the dangerous stuff happens.
         | 
         | [1] Boundaries by Gary B. from Destroy All Software
         | https://www.youtube.com/watch?v=yTkzNHF6rMs
        
       | munchler wrote:
       | I appreciate this article. It's clear, easy to understand, and
       | uses theory to propose practical improvements to FP.
       | 
       | Some minor nitpicks:
       | 
       | > In total functional programming ⊥ does not exist.
       | 
       | That doesn't seem right. In total FP, we can still define a ⊥
       | type that has no values. I think the problem here is that the
       | author uses ⊥ to indicate both a type and a value of that type
       | (e.g. f ⊥ = ⊥), which is confusing.
       | 
       | > 0 / 0 = 0
       | 
       | Surely it would be better to use a Maybe type for this instead?
       | 
       |       0 / 1 = Just 0
       |       0 / 0 = Nothing
       | 
       | The same approach also addresses the difficulty of hd:
       | 
       |       hd :: List a -> Maybe a
       |       hd (Cons a x) = Just a
       |       hd Nil = Nothing
       | 
       | F# has this function, and calls it "tryHead", rather than just
       | "head".
        
         | lgas wrote:
         | > Surely it would be better to use a Maybe type for this
         | > instead?
         | >
         | >     0 / 1 = Just 0
         | >     0 / 0 = Nothing
         | 
         | This would make it impossible to compose mathematical
         | expressions of more than one operation without discharging the
         | Maybe which would get ugly fast.
        
           | munchler wrote:
           | But that's the true nature of division. Unless one can prove
           | that the denominator is non-zero, the computation is not
           | well defined.
           | 
           | Practically speaking, languages like Haskell and F# have a
           | monadic syntax that makes this considerably less ugly.
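For instance, in Haskell's Maybe monad (`safeDiv` and `calc` are hypothetical names, not from the paper), chaining two fallible divisions stays readable:

```haskell
-- Division that makes partiality explicit in the type.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)

-- (a / b) / c without manually unpacking each Maybe: any Nothing
-- short-circuits the whole computation.
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  x <- a `safeDiv` b
  x `safeDiv` c
```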
        
             | trealira wrote:
             | It seems like it would be more practical to create non-zero
             | integer/other number types, and only those can be used as
             | denominators. But it would also be nice to not have to make
             | a custom subtype every time you needed something with a
             | specific property.
        
               | munchler wrote:
               | Agreed. I think this is where dependent types become
               | useful, but that is a whole can of worms.
        
             | nyrikki wrote:
             | Division isn't closed on the natural numbers.
             | 
             | Total functions have to return either true or false
             | explicitly; failure as negation doesn't work.
             | 
             | That is where you get Gödel's 'this statement is false'
             | from.
             | 
             | This is the law of the excluded middle that broke
             | Principia Mathematica.
             | 
             | Plato, Aristotle, and Russell's use of the law of the
             | excluded middle is what Gödel, Church, and Turing
             | leveraged.
             | 
             | For problems in P this distinction is harder to see because
             | P=co-P but we think NP!=co-NP.
             | 
             | As NP are by definition decision problems this is
             | important.
             | 
             | NP being the provable yes-instances and co-NP being the no-
             | instances, even proving a function is total is hard.
             | 
             | If you accept failure as negation you may be able to build
             | a sub-Turing machine that always halts, but you won't be
             | able to show it is complete and consistent.
             | 
             | That is why ZF disallows statements like 'this statement is
             | false'.
        
               | mrkeen wrote:
               | > even proving a function is total is hard.
               | 
               | It's best left to the compiler.
               | 
               | But if you're considering doing it, don't imagine
               | yourself taking an existing function like qsort and
               | trying to prove it's total. Instead try building up a new
               | function and only build it out of other total functions.
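A sketch of that style in Haskell: an insertion sort assembled from a structurally recursive insert and foldr, the kind of definition a totality checker accepts directly, in contrast to a general-recursive qsort.

```haskell
-- insert recurses only on the structurally smaller tail, so each piece
-- is evidently total; isort is then total by construction.
insert :: Ord a => a -> [a] -> [a]
insert x []       = [x]
insert x (y : ys)
  | x <= y        = x : y : ys
  | otherwise     = y : insert x ys

isort :: Ord a => [a] -> [a]
isort = foldr insert []
```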
        
               | nyrikki wrote:
               | If your language is TC and thus supports general
               | recursion, no algorithm can possibly exist for deciding
               | whether a function is in fact total in the general case.
               | 
               | As all primitive recursive functions are provably total,
               | that is a place a compiler can work. But just using for
               | loops gets you there too.
               | 
               | Total functions that are also pure functions are a pretty
               | small set so there may be other reductions like the
               | above. But don't depend on the compiler.
               | 
               | But I agree writing total functions from the start is the
               | ideal if possible.
        
             | madmax96 wrote:
             | Dependently typed languages make it even better.
             | 
             | For example, in Lean4:
             | 
             |     def myDiv (numerator : Nat) {denominator : Nat}
             |         (denominatorNotZero : denominator ≠ 0) : Nat :=
             |       if denominator > numerator then
             |         0
             |       else
             |         1 + myDiv (numerator - denominator) denominatorNotZero
             | 
             |     -- Example usage.
             |     example : myDiv 1 (denominator := 1) (by simp) = 1 := rfl
             |     example : myDiv 120 (denominator := 10) (by simp) = 12 := rfl
             | 
             | You have to submit a proof that the denominator is non-zero
             | in order to use `myDiv`. No monad required ;).
        
             | jimbokun wrote:
             | Can Haskell enforce a type of "any integer other than
             | zero"?
             | 
             | I feel like you would just end up with the equivalent of
             | Maybe, but not sure.
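One common approach, sketched here with hypothetical names: a newtype with a smart constructor. You do face a Maybe, but only once, at construction; every later division by a NonZero is total.

```haskell
-- In real code the NonZero constructor would not be exported, so the only
-- way to obtain one is mkNonZero, which performs the zero check once.
newtype NonZero = NonZero Int

mkNonZero :: Int -> Maybe NonZero
mkNonZero 0 = Nothing
mkNonZero n = Just (NonZero n)

-- Total: the type rules out a zero denominator.
divNZ :: Int -> NonZero -> Int
divNZ n (NonZero d) = n `div` d
```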
        
           | mrkeen wrote:
           | It _was_ ugly. And then do-notation arrived in around 1999
           | (_I think_ - I'd be super happy if someone can find a precise
           | year for me) and it became a solved problem.
           | 
           | Just in time for people to ignore it and start complaining
           | about the ugliness of flatMap from 2015 onwards.
        
       | pmcjones wrote:
       | Did anyone notice the fib function on the first page? It doesn't
       | terminate:
       | 
       |       fib :: Nat->Nat
       |       fib 0 = 0
       |       fib 1 = 1
       |       fib (n+2) = fib (n+1) + fib (n+2)
       | 
       | The last line should be fib (n+2) = fib (n+1) + fib (n). The
       | published version seems to have the same problem:
       | https://pdfs.semanticscholar.org/82b4/ea72b89270c528006dd253...
        
       | 082349872349872 wrote:
       | See also discussion (2007) at
       | http://lambda-the-ultimate.org/node/2003
       | 
       | (which mentions
       | http://pll.cpsc.ucalgary.ca/charity1/www/home.html )
        
       ___________________________________________________________________
       (page generated 2024-03-07 23:01 UTC)