[HN Gopher] Propositions as Types (2014) [pdf]
       ___________________________________________________________________
        
       Propositions as Types (2014) [pdf]
        
       Author : nill0
       Score  : 93 points
       Date   : 2025-05-06 11:36 UTC (11 hours ago)
        
 (HTM) web link (homepages.inf.ed.ac.uk)
 (TXT) w3m dump (homepages.inf.ed.ac.uk)
        
       | elikoga wrote:
        | In my opinion it's a tragedy that so few resources use
        | "Propositions as Types"/the Curry-Howard correspondence[0]
        | didactically, in tandem with teaching functional programming,
        | to teach structured proof-writing.
       | 
       | Many students do not feel comfortable with proof-writing and
       | cannot dispatch/discharge implications or quantifications
       | correctly when writing proofs and I believe that a structured
       | approach using the Curry-Howard correspondence could help.
       | 
       | [0]:
       | https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...
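A minimal sketch of what that structured approach can look like in Haskell (the names here are illustrative, not from any particular course): each connective becomes a type former, and writing a proof is writing a total function of the corresponding type.

```haskell
-- Conjunction is a pair type, implication is the function arrow.
-- Proving "A and B implies B and A" is writing the swap function:
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- Modus ponens, "from A and (A implies B), conclude B", is
-- just function application:
modusPonens :: (a, a -> b) -> b
modusPonens (x, f) = f x

-- Disjunction is Either; the proof is a case analysis:
orComm :: Either a b -> Either b a
orComm (Left x)  = Right x
orComm (Right y) = Left y

main :: IO ()
main = do
  print (andComm (1 :: Int, "two"))
  print (modusPonens (3 :: Int, (+ 1)))
```

A student who writes `andComm` has, without necessarily noticing, correctly discharged an implication whose antecedent is a conjunction.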
        
         | mietek wrote:
         | One of the best modern resources is "Programming language
         | foundations in Agda", co-authored by Wen Kokke and the same
         | Philip Wadler, and used for teaching at multiple universities.
         | 
         | https://plfa.github.io/
        
         | jerf wrote:
         | While I am first in line to say that programming is math
         | whether you like it or not, it is certainly the case that few
         | programmers are using that sort of math in their heads when
          | they are programming. I suspect that if, oh, about 60-70 years
          | of trying to change that has not made any headway, there is
          | little reason to expect that's going to change in the next 30.
         | 
         | But you can sort of backdoor the idea with something like this:
         | https://jerf.org/iri/post/2025/fp_lessons_types_as_assertion...
         | I won't claim it is exactly the same, but it is certainly
         | similar. Tell a programmer, "Hey, you can build a type that
         | carries certain guarantees with it, so that _the rest of the
          | program doesn't need to be constantly validating it_", a
         | problem any 5+ year dev ought to be very familiar with, and I
         | think you make more progress in the direction of this notion
         | than a paper full of logic diagrams in a notation even most
         | computer scientist undergrads will never encounter.
         | 
         | (I'm not even certain if I directly encountered that in my
         | Master's degree. It's been a while. I think I did, but I think
         | it was only one class, and not a huge part of it. I'm not sure
         | because I know I did more with it on my own, so I can't
         | remember if I was formally taught it or if I picked it up
         | entirely myself.)
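The "backdoor" jerf describes can be sketched in a few lines of Haskell; `Percentage` and `mkPercentage` are hypothetical names used purely for illustration:

```haskell
-- Hypothetical Percentage type: the smart constructor is the only
-- way in, so the range check happens exactly once.
newtype Percentage = Percentage Int
  deriving (Show, Eq)

-- Single validation point (in a real module, hide the raw
-- Percentage constructor in the export list).
mkPercentage :: Int -> Maybe Percentage
mkPercentage n
  | n >= 0 && n <= 100 = Just (Percentage n)
  | otherwise          = Nothing

-- Downstream code needs no defensive range checks: possessing a
-- Percentage is the guarantee.
applyDiscount :: Percentage -> Int -> Int
applyDiscount (Percentage p) price = price - (price * p) `div` 100

main :: IO ()
main = do
  print (mkPercentage 150)
  print (applyDiscount <$> mkPercentage 25 <*> Just 200)
```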
        
           | skybrian wrote:
           | While I generally agree with defining new types to assert
           | that validation has been done, I think your blog post could
           | have explained more about what kinds of validation are
           | practical to do. For example:
           | 
           | > Address that represents a "street address" that has been
           | validated by your street address to exist
           | 
           | What does it even mean to verify that a street address
           | exists? Verifying real-world relationships is complex and
           | error-prone. I'm reminded of the genre of articles starting
           | with "Falsehoods programmers believe about names" [1].
           | 
           | In practice, the rules that can be enforced are often imposed
           | by the system itself. It simply doesn't accept data that
           | isn't in correct format or isn't consistent with other data
           | it already has. And we need to be cautious about what harm
           | might be caused by these rejections.
           | 
           | Having the validation logic in one place will certainly help
           | when fixing mistakes, but then what do you do about data
           | that's already been accepted, but is no longer "valid?" This
           | sort of thing makes long-running systems like databases hard
           | to maintain.
           | 
           | [1] https://www.kalzumeus.com/2010/06/17/falsehoods-
           | programmers-...
        
             | fmbb wrote:
             | Far too many programmers forget that time passes.
        
               | daxfohl wrote:
               | Yeah, the issue is that proofs are harder than people
               | think, even for trivial things (try a few easy leetcode
               | problems in Lean with a proof of correctness), and less
               | useful than people think, especially when you get past
               | low level algorithms and into domain logic (your point
               | exactly). They also don't serialize well, so a database
               | or API call with a "proof" field would be susceptible to
               | fudging, or the definition could change between
               | deployments. They're also easy to make incompatible: one
               | library requires bounds proofs for signed ints, but your
                | app uses unsigned ints, so you have to either rewrite
                | your app or rewrite the library, or cast, which your
                | type system then has to handle like a "checked exception"
                | and propagate that possibility throughout your whole type
                | domain and app logic.
               | 
               | I'm pretty convinced there's a good reason that while
               | "propositions as types" is cool in theory, it's unlikely
               | we'll ever see it catch on in practice.
        
             | jerf wrote:
             | > I think your blog post could have explained more about
             | what kinds of validation are practical to do
             | 
             | Perhaps following the two links with the word "valid" in
              | them will answer your concerns:
             | https://jerf.org/iri/post/2023/value_validity/
             | 
             | Note that article does explicitly have the sentence "Let's
             | forget the Umptydozen Falsehoods Programmers Believe About
             | Addresses for the sake of argument and stipulate a perfect
             | such function." These are examples. Trying to write "here's
              | how to validate everything that could ever happen and also
              | here's a breakdown of all the falsehoods and also
             | here's how it interacts with all your other logic" is not
             | exactly a blog post so much as a book series. It turns out
             | that even if you completely ignore the Umptydozen
             | falsehoods of all the various kinds, you _still_ have
             | plenty of validation problems to talk about!
             | 
             | However, the in-a-nutshell answer to "how do you handle
             | time invalidating things" is that you treat your database
             | as an untrusted store and validate things as needed. I'm
             | actually an 80/20 guy on using databases to maintain
             | integrity for much this reason; I love me some foreign keys
             | and I use them extensively but the truth is that that is
             | only a partial solution to the data validity problem no
             | matter how clever you get, and temporal inconsistency is a
             | big one. Once you have any source of inconsistencies or
             | errors in your DB, a whole whackload of need for validation
             | and care basically comes dropping in all at once, or, to
             | put it another way, if you're not 100.00000% successful at
             | maintaining data integrity, the next practical option is
             | 95%. There is no practical in-between, because even that
             | .001% will end up driving you every bit as crazy as 5%
             | being wrong in most ways.
             | 
             | But that's also out-of-scope for blog posts targeted at
             | people who are only doing ad-hoc validation whenever they
             | are forced to. Learn how to validate properly at all, then
             | get better when you have a time-based problem.
        
               | skybrian wrote:
               | Good article. Yeah, I wouldn't expect a full explanation,
               | just some kind of "here be dragons" caveat. Perhaps a
               | hyperlink alone is a bit too subtle since readers aren't
               | always going to dereference it. (And there's some irony
               | there, given the subject of the linked article.)
               | 
               | The types in Go's template/html package are a pretty
               | interesting example of using types tactically to indicate
               | validity. The _HTML_ type is used to turn off HTML
               | escaping when it's already been done. It's using a type
               | as a _loophole._ It's still very useful to have a type
               | like that when reviewing code for security bugs, because
               | you know where to look for problems. Unsafe sections in
               | Rust serve a similar purpose.
               | 
               | Types are about creating trust, and this trust is often
               | short-lived. When data crosses a security boundary, the
                | validation has to be done again.
        
           | immibis wrote:
           | This has nothing to do with C-H though.
        
             | jerf wrote:
             | It isn't exactly the same. I believe I said that up front.
             | 
             | But it provides a bridge. Both things have "You can assert
             | Property X about all things of type Y" built into them.
             | Trying to jump people straight to C-H without giving any
             | hint whatsoever of why this may be useful in any real
             | code... well, that's a non-trivial percentage of why this
             | approach has completely failed to capture any significant
             | mindspace, though I leave it as an exercise to the reader
             | to determine the exact percentage in their minds.
        
         | mrkeen wrote:
         | This may not go over as well as you'd think.
         | 
         | I took Haskell in the same uni course with FSAs, Turing
         | machines, Lambda calculus, natural deduction, Hoare logic and
         | structural induction.
         | 
         | So I was exposed to the "base case & step case" of structural
         | induction at the same time as learning about it in Haskell. And
         | for whatever reason it didn't leave a good impression.
         | Implementing it formally was harder and more error-prone than
         | shooting from the hip in an IDE. What was the point!?
         | 
         | Now I smash out Haskell code daily as the quickest way to end
         | up with little binaries that just-work-if-they-compile. It took
         | me a while to realise that the upside of all this
         | formality/mathiness is that _other people did the proofs so
          | that you don't need to_. I get to take for granted that a
         | boolean is true or false (and not some brilliant third value!)
         | and that (map f . map g) is (map (f . g)).
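That last law comes from equational reasoning/parametricity (the proofs other people did); spot-checking it on a sample input just illustrates the point:

```haskell
-- The fusion law (map f . map g) == map (f . g) is guaranteed, so
-- GHC may rewrite the two-pass pipeline into the one-pass one
-- without changing results.
unfused, fused :: [Int] -> [Int]
unfused = map (* 2) . map (+ 1)   -- two traversals
fused   = map ((* 2) . (+ 1))     -- one traversal, same answer

main :: IO ()
main = print (unfused [1, 2, 3] == fused [1, 2, 3])
```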
        
           | AnimalMuppet wrote:
           | "A man is rich in proportion to the number of things which he
            | can afford to let alone." -- Henry David Thoreau, Walden;
            | or, Life in the Woods
           | 
           | If I have to do the proof, then as you say, it's probably
           | harder and (in many cases) more error prone than if I didn't
           | use a proof. If the language can do it for me, and I get the
           | lack of errors without having to do the work (and without the
           | chance of me making the mistakes)? Yeah, now we're talking.
           | (Yes, I am aware that I still have to design the types to fit
           | what the program is doing.)
           | 
           | If a tool introduces complexity, it has to make up for it by
           | eliminating at least that much complexity somewhere else. If
           | it doesn't, the tool isn't worth using.
        
           | edef wrote:
            | > I get to take for granted that a boolean is true or false
            | > (and not some brilliant third value!)
            | 
            |     [True, False, undefined] :: [Bool]
        
             | immibis wrote:
             | The traditional third value is actually FileNotFound:
             | https://thedailywtf.com/articles/what_is_truth_0x3f_
             | 
             | but in Haskell, yes, it's undefined. Which isn't a real
             | value! For example, infinite loops are undefined. Theorists
             | like to call it a value of every type, but in practical
             | terms, it's more like a computation that never produces a
             | value. The builtin "undefined" can be written as an
             | infinite loop ("undefined = undefined") and many other pure
             | infinite loops can also act as undefined values. The
             | runtime is able to catch some and crash instead of hanging,
             | but not others.
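A small demonstration of that behaviour, assuming GHC and base's Control.Exception:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- `undefined` inhabits every type, so this list typechecks...
bools :: [Bool]
bools = [True, False, undefined]

main :: IO ()
main = do
  -- ...and laziness means unforced occurrences are harmless:
  print (length bools)           -- length never looks at the elements
  -- Forcing it is what blows up; GHC surfaces it as an exception:
  r <- try (evaluate (bools !! 2)) :: IO (Either SomeException Bool)
  print (either (const "threw") show r)
```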
        
             | ddellacosta wrote:
             | See, you can even do quantum computing very naturally in
             | Haskell.
        
           | mpweiher wrote:
           | > other people did the proofs so that you don't need to.
           | 
           | So?
           | 
           | You don't need to either.
           | 
           | There simply is no evidence that having such proofs has any
           | significant effect on software quality.
           | 
           | https://blog.metaobject.com/2014/06/the-safyness-of-
           | static-t...
        
         | talkingtab wrote:
         | The connection to intuitionism (see
         | https://en.wikipedia.org/wiki/Intuitionism) gives this kind of
         | thinking much broader appeal and application. It seems to me we
         | live in a time so dominated by analytical thinking that we
         | completely ignore other useful and effective modes.
        
           | johnnyjeans wrote:
           | Intuitionism in this context just means the proofs have to be
           | constructive. (no proof-by-contradiction, or other half-assed
           | academic hacks)
        
             | AnimalMuppet wrote:
             | Why do you say that proof-by-contradiction is a "half-assed
             | academic hack"?
             | 
             | In particular, if someone isn't already an intuitionist or
             | constructivist, what reason can you give that would be
             | valid in their frame of reference?
        
               | johnnyjeans wrote:
               | Because they explain nothing and aren't useful for
               | building new insights. It's not a direct verification of
               | the existence or lack-of-existence of a thing with given
               | properties. It relies on rules lawyering in a very
               | specific logic to say something with basically no
               | substance. Allowing it causes more problems for automated
                | proofs than it solves, so long as your domain only deals
               | with finites (which all real-world domains do
               | exclusively.)
               | 
               | They're also generally much easier to create than
               | constructive proofs, which makes them a convenient go-to
               | for lazy academics.
        
               | ndriscoll wrote:
               | Proof by contradiction is fine for lack-of-existence
               | (indeed, !P is defined in type theory as P -> false). I
                | think also, if I got my types right, you can do
                | 
                |     def eliminateTripleNot(f: !!!P): !P =
                |       {p: P => f({g: !P => g(p)})}
                | 
                | for a constructive proof of !!!P => !P. So it's really
               | just !!P => P that causes trouble. It's not clear to me
               | though whether LEM is actually okay in the "fast and
               | loose reasoning is morally correct" sense (i.e. if it's
               | okay as long as you don't cheat and do something like use
                | a non-terminating loop or exception to make your !!P)?
                | Are there cases where you didn't "obviously"
               | cheat where it's "wrong" to use it? In some sense, I see
               | !!P => P as a "cast" from an opaque function type to
               | specifically be a closure {f => f(p)} for some p: P.
        
               | cmrx64 wrote:
               | that isn't proof by contradiction, that's plain old proof
                | of negation. proof of negation is Neg : (P -> False) -> !P,
                | proof by contradiction is PBC : (!P -> False) -> P. a
                | subtle yet crucial distinction
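The same distinction can be transcribed into plain Haskell with Void standing in for falsity (a sketch of ndriscoll's term, nothing more):

```haskell
import Data.Void (Void)

-- Falsity is the empty type; negation is a function into it.
type Not p = p -> Void

-- ndriscoll's eliminateTripleNot: triple negation collapses to
-- single negation, fully constructively.
tripleNeg :: Not (Not (Not p)) -> Not p
tripleNeg f p = f (\g -> g p)

-- Double-negation *introduction* is also a plain term:
notNotIntro :: p -> Not (Not p)
notNotIntro p g = g p

-- But proof by contradiction proper, Not (Not p) -> p, has no
-- closed term here: nothing can conjure a p out of a function.
-- doubleNegElim :: Not (Not p) -> p   -- not constructively valid

main :: IO ()
main = putStrLn "negation proofs typecheck"
```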
        
               | ndriscoll wrote:
               | Ah, right. It's been long enough that I've forgotten what
               | the words mean, though I _think_ with my ninja edit, PBC
               | is actually still constructively valid as a method to
               | prove a negation?
        
               | Tainnor wrote:
               | > They're also generally much easier to create than
               | constructive proofs
               | 
               | This is generally seen as a good thing by most "lazy
               | academics". I guess your priorities are just different.
               | 
               | Constructivism is also not at all opposed to infinite
               | quantities (that would be finitism). "Real-world domains"
               | deal with infinite spaces (e.g. of all functions R->R)
               | quite regularly across all scientific domains.
        
               | TimorousBestie wrote:
               | This is needlessly aggressive. There's nothing wrong with
               | assuming the axiom of choice to prove a result, just as
               | there's nothing wrong with trying to develop a proof that
               | doesn't rely on it. Saying a nonconstructive proof
               | "explains nothing" is myopic at best. Insights come from
               | everywhere.
        
               | zozbot234 wrote:
               | As others have explained, there is indeed nothing wrong
               | with "proof by contradiction" of a _negative_ statement.
               | Intuitionistic logic does not view statements phrased in
               | the  "positive" and in the "negative" as fully
               | equivalent, as classical logic does. (There are ways of
               | formalizing this point of view quite rigorously, such as
               | in so-called "ecumenical logics" where classical and
               | constructive/intuitionistic reasoning can in fact coexist
               | and interoperate, but statements derived from classical
               | reasoning can only be translated in the negative as seen
               | within constructive reasoning.)
        
               | AnimalMuppet wrote:
               | So I asked specifically for you to assume that I don't
               | already agree with intuitionist logic. Now, assuming
               | that, what is wrong with proof by contradiction of a
               | positive statement?
               | 
               | Both you and johnnyjeans gave me answers that already
               | assumed intuitionism. I don't assume that. Can you give
                | me any reasons that start from where I am and show me why
               | I _should_ adopt intuitionism?
        
         | ndriscoll wrote:
         | I've been thinking the same thing and mentioned it the other
         | day[0]. When I was learning proofs, I saw people struggle with
         | the idea of needing to choose a "generic" element to prove a
         | forall statement, for example. I suspect it might make more
         | sense if you taught things in terms of needing to write a
         | function x=>P(x). I think in some cases, thinking in terms of
         | programming might also change how we think about structuring
         | things. e.g. define a data structure for a "point with error
         | tolerance" (x, epsilon), then continuity says given a
         | toleranced-point y* at f(x), I can find a toleranced-point x*
         | at x such that f(x*) is "compatible" (equal at the base point
         | and within tolerance) with y*. This factoring lets you avoid
         | shocking new students with quantifier soup. Likewise the chain
         | rule is just straightforward composition when you define a
         | "derivative at a point" data structure Df((x,m))=(f(x),
         | m*f'(x)).
         | 
         | This is not at all new, but I suppose it's currently a lot to
         | ask students to learn proof mechanics while also unwinding
         | multiple layers of definitions for abstractions. Computers can
         | help do the unwinding (or lifting) automatically to make it
         | easier to make small "quality of life" definitions that
         | otherwise wouldn't be hugely useful when pen-and-paper-proofs
         | would always be unwinding them anyway.
         | 
         | Basically, math education could look a lot like software
         | engineering education. The concerns and solution patterns are
         | basically the same. e.g. typeclasses are pretty much how
         | mathematics does polymorphism, and are probably usually the
         | right way to do it in programming too.
         | 
         | [0] https://news.ycombinator.com/item?id=43875101
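The "derivative at a point" structure ndriscoll describes is essentially forward-mode dual numbers; a minimal Haskell sketch (`Dual` and `lift` are illustrative names):

```haskell
-- Carry a point and a tangent; Df((x, m)) = (f x, m * f' x).
data Dual = Dual { point :: Double, tangent :: Double }
  deriving (Show, Eq)

-- Lift a function with known derivative to act on Duals:
lift :: (Double -> Double) -> (Double -> Double) -> Dual -> Dual
lift f f' (Dual x m) = Dual (f x) (m * f' x)

square, double :: Dual -> Dual
square = lift (^ (2 :: Int)) (* 2)  -- x^2, derivative 2x
double = lift (* 2) (const 2)       -- 2x, derivative 2

main :: IO ()
main =
  -- The chain rule is plain composition: d/dx (2x)^2 at x = 3 is 24.
  print ((square . double) (Dual 3 1))
```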
        
           | immibis wrote:
           | I don't think the C-H correspondence is necessary for this.
           | It would be a useful way to think even if the C-H
           | correspondence were false.
        
         | js8 wrote:
          | I would recommend
          | https://lean-lang.org/theorem_proving_in_lean4/, especially
          | the first few chapters, if you're willing to use Lean.
        
         | ngruhn wrote:
         | 100% agree. I did not understand induction until I learned Coq.
         | It really shows how mechanical proving can be.
        
         | boxfire wrote:
          | There's a book that's explicitly about this, "Program = Proof",
          | and though it isn't for beginners and could use a lighter
          | version for earlier learners, it's an excellent example.
        
       | marcusb wrote:
       | The author has given a few talks built around the same concept:
       | https://www.youtube.com/watch?v=IOiZatlZtGU
        
       | indyjo wrote:
       | As much as the concept blew me away when I first heard of it, I
       | can't shake the feeling that the Curry-Howard correspondence is
       | somehow mis-marketed as something that would immediately cater to
       | programmers. The idea to encode propositions into types sounds
       | enticing for programmers, because there are indeed a lot of cases
       | where something needs to be conveyed that can't be conveyed using
       | type systems (or other features) of common programming languages.
       | 
       | Examples include the famous "the caller of this function
       | guarantees that the argument is a non-empty list" but also "the
       | caller guarantees that the argument object has been properly
       | locked against concurrent access before using the function".
       | 
       | However, in my experience, the community is more interested in
       | mathematics than in programming and I don't know if anybody is
       | really working to provide propositions-as-types to mainstream
       | programmers. This is certainly because it is hard enough to prove
       | soundness of a strict functional programming language as Agda or
       | Rocq, much more for anything imperative, stateful, "real-world",
       | non-strict, non-pure, concurrent, ill-defined, you name it.
       | 
       | So, for me the promise of "showing programmers that what they do
       | is actually mathematics" is not really kept as long as the
       | definition of "programmer" is so narrow that it excludes the vast
       | majority of people who define themselves as programmers.
       | 
       | What I'm trying to say: I wish there were more efforts to bring
       | more powerful formal methods, especially as powerful as dependent
       | types, to existing mainstream programming languages where they
       | could be useful in an industrial context. Or at least try to come
       | up with new programming languages that are more pragmatic and do
       | not force the programmer into some narrow paradigm.
       | 
       | Am I wrong? I hope so :)
        
         | ndriscoll wrote:
         | Programmers regularly do this stuff under various
         | names/phrases. "Make illegal states unrepresentable", "parse,
         | don't validate", "resource acquisition is initialization", etc.
         | It's all about encoding the fact that your structure isn't just
         | data, but also represents some proof of provenance that
         | guarantees some property. Scala is an industrial language that
         | gives more tools than you might usually find for this while
         | also letting you be pragmatic and integrate with Java's massive
         | ecosystem (though you quickly learn you'd rather not because
         | Java code tends to not do a good job of leveraging types in a
         | sane way).
        
         | AnimalMuppet wrote:
         | No, you're not wrong. I mean, in some circles, it's a battle to
         | get programmers to use types _at all_. And, while not every
         | proposition can currently be usefully encoded in a type, every
         | type currently encodes a proposition. So if we can get the
          | people who don't use types to start using them, that's
         | probably the lowest-hanging fruit.
         | 
         | From there, every step to improve the expressiveness of types
         | allows you to encode more within the type system. For example,
         | one could think about encoding that non-empty requirement in
         | C++ or Java collection types. It would be nontrivial - a number
         | of things would need adjusting - but you could think about
         | doing it. (Or rather, it would be nontrivial to do with static
         | types. You could more easily make it throw if the condition was
         | not satisfied.)
         | 
         | Your "properly locked" example is much harder. It would require
         | telling the type system what the definition of "properly
         | locked" is. Even at that, I can't see any way to do it as a
         | static type. And that's a bummer, because I far prefer static
         | types. I prefer my proofs to fail at compile time.
         | 
         | But my main point is, "proving" is not binary. Every
         | incremental advance in what types can check proves more things,
         | and therefore leaves the programmers having to prove less in
         | their heads.
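For contrast with the C++/Java case, GHC's base library already ships the non-empty encoding, so the emptiness check can happen exactly once, at the boundary:

```haskell
import Data.List.NonEmpty (NonEmpty)
import qualified Data.List.NonEmpty as NE

-- The one place emptiness is checked:
parseScores :: [Int] -> Maybe (NonEmpty Int)
parseScores = NE.nonEmpty

-- Past the boundary, no defensive code: maximum of a NonEmpty
-- cannot fail, because emptiness was ruled out by construction.
best :: NonEmpty Int -> Int
best = maximum

main :: IO ()
main = do
  print (fmap best (parseScores [3, 9, 4]))  -- Just 9
  print (fmap best (parseScores []))         -- Nothing
```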
        
           | daxfohl wrote:
           | And it is continually improving. Rust borrow checker is an
           | example of this.
           | 
           | But as far as jumping into the deep end of dependent types,
           | that's a whole other ball of wax. Like, imagine (or try!)
           | writing leetcode solutions in Lean or Idris, with a type that
           | proves they are correct and (bonus points) run in the
           | specified time complexity. Even defining the hypothesis is
            | nontrivial. It's all doable, but takes orders of magnitude
           | longer. But with dependent types you have to do it because
           | the next function you call may require a proof that the leet
           | function returns the thing it's supposed to.
           | 
           | That's just for algorithmic leet problems. Now imagine having
           | to write proofs in a complex multithreaded system that your
           | code is not accessing out of bounds arrays or leaking memory,
           | and integrating libraries that each have slightly different
           | semantics for doing the same, or they use unsigned types that
           | make all the proofs incompatible, etc. At that point, you
           | have to basically give up on your borrow checker and fall
           | back to using runtime checks anyway. And even if you did get
           | your system into a fully proved state, that only applies to
           | that one binary; it makes no guarantees about distributed
           | interactions, rollouts and rollbacks, or any of the other
           | things that are the more frequent cause of bugs in production
            | systems. In fact, it may encourage more "distributed
            | spaghetti" just to work around having to prove everything.
           | 
           | There's an analogy with how checked exceptions work in Java:
           | cool thought, but mostly get in the way of what you're really
           | trying to do after a while, or break things when new types of
           | exceptions get added to the function, so everyone ends up
           | just wrapping them with unchecked exceptions anyway. This is
           | what would end up happening with full dependent types too,
           | except it would pervade the entire type system and every
           | function you write. The eventual outcome would likely be
           | everyone just works around them (every single function, even
           | fairly trivial ones like divide, would return an Option that
           | the caller would have to handle or bubble up), and the actual
           | code would be even less type safe than it would be with a
           | simpler type system.
           | 
            | So, ultimately the way Rust is going, where some key things,
            | like borrow checking, are built into the language, seems to
           | be the better approach.
        
         | zmgsabst wrote:
         | Programmers use it all the time!
         | 
         | They turn diagrams representing the logic of the business
         | domain into expressions in their type system of choice.
         | 
         | I think it's a failure of pedagogy to focus on the abstract
         | "prove stuff" sense, rather than the applied "domain diagram
         | logic to types" at the core of what SDEs actually _do_. But
         | turning a business domain diagram into a systems diagram into
         | code _is_ translating a category into another via embedding
         | before translating into a type theory.
         | 
         | My opinion is that you need to start with that core and explain
          | how the other features make that better, e.g., how do
          | polynomial monads address the weirdness between logging and
         | errors (and errors in logging errors, etc)?
        
         | immibis wrote:
         | C-H isn't useful to programmers at all. The programs that
         | represent proofs end up being useless programs, most of the
         | time, and the proofs represented by real-world programs end up
         | being useless proofs, most of the time.
         | 
         | Most programs deal with several objects of the same type - your
         | program probably contains more than one integer (shocking,
         | right?). Since C-H maps each unique type to a unique proof
         | term, the same type maps to the same proof term. A function
         | that calculates greatest common factor of two numbers proves
          | that (A∧A)→A, or "(A and A) implies A", which is...
          | uninteresting.
         | 
          | In the reverse direction, the program generated by (A∧A)→A
          | is either the first or second entry selector from a tuple,
          | and doesn't calculate the greatest common factor of two
          | integers. (A∨A)→A is slightly more interesting, taking
          | Either<int, int> and returning int.
         | 
         | It's true that C-H of logical disjunction gives you Either
         | types, conjunction gives you tuple types, and implication gives
         | you function types. Kinda fascinating, enough to write a blog
         | post about, but still without any practical purpose.
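
          A sketch (not from the thread) of the dictionary described
          above, written out in Haskell; the type variables stand in
          for arbitrary propositions:

```haskell
-- Propositions-as-types dictionary:
--   conjunction A ∧ B  ↔ pair type (a, b)
--   disjunction A ∨ B  ↔ Either a b
--   implication A → B  ↔ function type a -> b

-- A "proof" of (A ∧ A) → A: project one component of the pair.
andElim :: (a, a) -> a
andElim = fst

-- A "proof" of (A ∨ A) → A: case analysis on the Either.
orElim :: Either a a -> a
orElim (Left x)  = x
orElim (Right y) = y

main :: IO ()
main = do
  print (andElim (1 :: Int, 2))      -- prints 1 (first selector)
  print (orElim (Right (3 :: Int)))  -- prints 3 (unwraps either side)
```

          As the parent notes, neither program computes anything as
          interesting as a greatest common factor; the type only pins
          down the logical shape, not the arithmetic.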
         | 
         | Practical types with additional requirements (such as non-empty
         | lists) have nothing to do with C-H and could exist even if C-H
         | were false. Same for dependent types. As I said, C-H is
         | completely useless in actual programming.
         | 
         | I do wish we had more formal methods though. Formal methods of
         | programming have nothing to do with C-H either. Static typing
         | has nothing to do with C-H. The Rust borrow checker has nothing
         | to do with C-H. Complete formal verification (as in seL4) has
         | nothing to do with C-H.
        
         | BalinKing wrote:
         | I think Idris (https://www.idris-lang.org/) is primarily
          | focused on using dependent types for programming, and even
          | Lean has been expanding in this direction (cf. the
          | _Functional Programming in Lean_ book:
          | https://lean-lang.org/functional_programming_in_lean/).
        
       | aatd86 wrote:
        | The paper could have gone further into set theory and the
        | set-theoretic interpretation of types.
       | 
       | A proposition being a set intension.
        
       | js8 wrote:
        | I really recommend [56] M. H. Sørensen and P. Urzyczyn,
        | Lectures on the Curry-Howard Isomorphism, Elsevier, 2006, on
        | this topic. I found it online.
       | 
        | It is very self-contained and explains some interesting
        | topics, such as the relationship between intuitionistic and
        | classical logic.
        
       | immibis wrote:
       | The Curry-Howard correspondence seems like one of those things
       | that's weird and unexpected but not actually very useful?
       | 
       | Most of the types that correspond to propositions, and programs
       | that correspond to proofs, aren't of much utility in actual
       | programming. And most of the useful programs and their types
       | don't correspond to interesting proofs and propositions.
       | 
       | The paper relates it to other connections between fields, such as
       | Cartesian coordinates linking geometry and algebra. This allows
       | you to bring the power of algebra into geometric problems and
       | geometric intuition into algebra problems. But does the Curry-
       | Howard correspondence bring similar powers?
       | 
       | --
       | 
       | This kind of rearrangement without really changing anything
       | reminds me of the equivalence between first-order logic terms and
        | sets. Consider [P] to be the set of models where statement P
        | is true - then [A∧B] = [A]∩[B], [A∨B] = [A]∪[B], and so on.
       | But this doesn't lead to any new insights. You basically just
       | wrote the same thing with different symbols.
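
        The equivalence being described can be checked mechanically on
        a toy propositional language (a sketch, not from the thread;
        here a model is represented as the list of atoms it makes
        true):

```haskell
import Data.List (subsequences, intersect, union, sort)

-- Toy propositional formulas over single-character atoms.
data Prop = Atom Char | And Prop Prop | Or Prop Prop

-- Truth of a formula in a model (the list of atoms that hold).
eval :: [Char] -> Prop -> Bool
eval m (Atom c)  = c `elem` m
eval m (And p q) = eval m p && eval m q
eval m (Or  p q) = eval m p || eval m q

-- [P]: all models over the given atoms in which P is true.
models :: [Char] -> Prop -> [[Char]]
models atoms p = filter (\m -> eval m p) (subsequences atoms)

main :: IO ()
main = do
  let atoms = "ab"
      a = Atom 'a'
      b = Atom 'b'
  -- [A ∧ B] = [A] ∩ [B]
  print (models atoms (And a b)
           == models atoms a `intersect` models atoms b)  -- prints True
  -- [A ∨ B] = [A] ∪ [B]  (compared up to ordering)
  print (sort (models atoms (Or a b))
           == sort (models atoms a `union` models atoms b))  -- prints True
```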
       | 
       | In some contexts (proving soundness/completeness) it can allow
       | you to step down one turtle, but you still have aleph-null
       | turtles below you so that doesn't seem that useful either.
        
       | emorning3 wrote:
       | I dont want to prove that my code is correct.
       | 
       | I would prefer to create specifications in logic and have the
       | code derived for me.
       | 
       | Other than something like Prolog, are there tools for working
       | this way?
        
         | Tainnor wrote:
          | This is impossible to solve in the general case due to
          | Gödel's first incompleteness theorem.
        
         | emorning3 wrote:
         | PS...
         | 
         | I've been waiting for such tools for 40 years.
         | 
         | I just now realized that vibe coding is probably the closest
         | we'll get in my lifetime.
         | 
         | It's a start I guess.
        
         | Gajurgensen wrote:
         | Program synthesis is of course very difficult in general,
         | especially if you want it to be entirely automated. One option
         | to make it more practical is to have the user drive synthesis
         | from specification to implementation via something which looks
         | like a sequence of tactics.
         | 
          | (I'll add a plug for some stuff we are working on at
          | Kestrel:
          | https://www.cs.utexas.edu/~moore/acl2/manuals/latest/index.h....
          | We've used the APT library to do stepwise refinements from
          | ACL2 specs to C code. Each step is something like "make
          | function tail recursive" or "switch to a new, isomorphic
          | shape of the data".)
         | 
         | By the way, Curry-Howard offers a compelling insight here:
         | deriving programs from specifications (i.e. types+propositions)
         | may be the same process as deriving proofs from propositions.
         | So the two processes can in principle work the exact same way.
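
          A small illustration (a sketch, not from the thread) of
          "specification as type": when a type is precise enough, here
          through parametric polymorphism, it leaves essentially one
          total program, so finding the program is exactly finding the
          proof of the corresponding proposition:

```haskell
-- Spec: from A ∧ B, conclude B ∧ A. The only total, parametric
-- implementation of this type is swapping the pair.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- Spec: (A → (B → C)) → ((A ∧ B) → C), i.e. uncurrying; again the
-- type leaves essentially one total program.
uncurry' :: (a -> b -> c) -> (a, b) -> c
uncurry' f (x, y) = f x y

main :: IO ()
main = do
  print (swap ('a', True))            -- prints (True,'a')
  print (uncurry' (+) (2 :: Int, 3))  -- prints 5
```

          Tools like typed holes in GHC, or proof search in Agda and
          Idris, exploit exactly this: the type is the proposition, and
          the editor searches for a term that proves it.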
        
       | gitroom wrote:
       | I feel like every time I try to do super formal proofs for code,
       | it ends up way more brutal than I expect and slower than just
       | fixing stuff as I go. Still love types catching my mess-ups for
       | me though.
        
         | cbdumas wrote:
         | This is the eternal tradeoff of testing, brought to its logical
         | (no pun intended) conclusion. Testing (and formal verification)
         | don't get you anything for free: writing detailed specs or
         | tests is at least as difficult as writing a correct
         | implementation. In fact given a sufficiently detailed spec, a
         | program conforming to the spec can be derived automatically.
        
       ___________________________________________________________________
       (page generated 2025-05-06 23:01 UTC)