[HN Gopher] A dependently typed language for proofs that you can...
       ___________________________________________________________________
        
       A dependently typed language for proofs that you can implement in
       one day
        
       Author : caotic123
       Score  : 130 points
       Date   : 2021-10-02 03:50 UTC (19 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | choeger wrote:
        | The syntax is quite ... complex, I dare say. It looks like you
        | borrowed from Haskell, which already makes it hard to gain an
        | intuitive understanding of the syntax, and added bars?
        
       | dwohnitmok wrote:
       | I find the often ML-inspired syntax too high of a bar for most
       | programmers to clear when being introduced to a language.
       | 
        | I think a syntax more similar to something like what is
        | introduced here: https://shuangrimu.com/posts/language-agnostic-
        | intro-to-depe... would be more accessible for a lot of
        | programmers.
       | 
       | I separately think that proofs are actually a bit overblown when
       | it comes to dependent types and that dependent types are most
        | useful for specification, but oftentimes could benefit from
        | "fake" implementations.
        
         | p4bl0 wrote:
         | Thanks for this fantastic link. I will share it with my
         | students at the end of my _intro to compilers_ class.
        
         | Zababa wrote:
          | I agree that ML-style function signatures are probably the
          | best and most readable. `name: type` is a lot better than
          | `type name`. I would argue that `match x with` would be better
          | than `case x of`, to highlight that pattern matching is
          | different from your usual switch. Calling out the constructors
          | explicitly with `constructors` is a good idea, I think it can
          | make things easier for new people.
         | 
         | Edit: I wonder if it would help to also have a record type.
         | Enum allows you to have full ADTs, but named tuples are often
         | more useful than regular tuples.
        
         | caotic123 wrote:
          | The syntax you presented is _really_ very accessible. Of
          | course, it makes things a little more verbose, but I think you
          | are right: the path to bringing more attention to dependent
          | types is probably making them more accessible too. Btw, great
          | article, I will take a more detailed look later.
        
           | dwohnitmok wrote:
            | That syntax might become unacceptably verbose once there
            | are many arguments that would usually be implicit. However,
            | I also think that implicit arguments are primarily needed
            | when working with
           | heavily-dependently-typed data structures, which I think is a
           | bad way of working with dependent types (e.g. I don't like
           | length indexed vectors).
           | 
           | Instead if you primarily work with non-dependent data
           | structures and then pass in dependently-typed invariants as a
           | separate argument, you can avoid a lot of the complexity
           | associated with dependent types. This allows your dependently
           | typed arguments to not have a runtime representation, e.g.
           | they could be given zero multiplicity in Idris. This in turn
           | allows you to easily substitute a property test or an
           | external solver (or just a human "trust me" override) instead
           | of being forced to always prove everything since you no
           | longer require a true implementation of a dependent type.
           | 
           | This in turn simultaneously severely reduces the need for
           | elaborate implicit arguments and makes verbosity much easier
           | to deal with.
        
             | arianvanp wrote:
             | This sounds interesting. Can you give a concrete example?
             | 
              | This seems to really fall into place with my feeling that
              | heavily dependently typed data types and programs using
              | them are hard to refactor, as the invariants that you
              | break in the refactor cascade through your entire program.
             | 
             | Separating them probably makes life easier
        
               | ImprobableTruth wrote:
               | If you're familiar with refinement types, what he's
               | mentioning is using dependent types to achieve refinement
               | types.
               | 
               | Let's take the typical example of "lists of size n". It's
               | either a special inductive type vec A n, or a list l with
               | a proof that length l > 0. The signatures of a head
                | function (using Coq syntax, where this approach is most
                | common) would be:
                | 
                |     head_safe1 {A n} : vec A (S n) -> A
                |     head_safe2 {A} : forall (l : list A), length l > 0 -> A
               | 
               | In both cases you match on the list and prove the nil
               | case to be impossible.
               | 
               | One benefit is that one can easily stack predicates. Need
               | a list of length n that is also sorted? Just add an
               | argument of type Sorted l.
               | 
               | Programs that are 'truly' dependently typed have their
               | benefits, but they definitely can feel more rigid.
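In a language without dependent types, the second style still has a runtime-checkable analogue: keep the plain data structure and pass the invariant as a separate piece of evidence that only a checked constructor can produce. A loose Python sketch of that shape (all names here are invented for illustration, not from the thread or PomPom):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NonEmptyProof:
    """Evidence that a particular list was checked to be non-empty."""
    subject: list


def prove_non_empty(xs: list) -> NonEmptyProof:
    # The only way to obtain the evidence is to pass the check,
    # playing the role of proving `length l > 0`.
    if not xs:
        raise ValueError("list is empty")
    return NonEmptyProof(xs)


def head_safe(xs: list, proof: NonEmptyProof):
    # The caller must supply evidence about this very list; the
    # data structure itself stays completely ordinary.
    assert proof.subject is xs
    return xs[0]
```

Because the evidence is a separate argument, it could just as well be produced by a property test or an external checker, which is the substitutability the parent comment describes.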
        
         | jb_s wrote:
          | I personally feel like I've just been smacked in the face with
          | a dump truck full of syntax that I barely understand, and I've
          | used FP before. It probably actually is simple, but for me, I
          | need a bit more explanation.
        
           | alisonkisk wrote:
           | "smacked in the face", or "presented with something that
           | takes 5 minutes to get a handle on"?
           | 
           | Think about how long it takes to learn a new API or data
           | model. A few minutes to learn a syntax isn't a big deal.
        
             | spinningslate wrote:
             | "smacked in the face" might be a bit extreme, but I can
             | sympathise with GP's point. I read the first line of the
             | first example:                   A :: ~ * ~> * => {(list A)
             | :: |new |empty }.
             | 
             | And got stuck at the first hurdle. I started parsing it in
             | my mind and read "A such that tilde star arrow tilde double
             | arrow". I stopped there, and thought "I'm lost". I started
             | trying to interpret it: * might mean wildcard; single arrow
             | could be curried functions (per Haskell). But I don't know
             | what tilde means, nor the double arrow.
             | 
             | Now, of course, I can go and look it up: it'll all be
             | defined somewhere. But that first experience - that _very
             | first_ "I've never seen this before, it sounds interesting"
             | hit a road block right there.
             | 
             | Maybe not smacked in the face, but enough friction for me
             | to click the back button.
        
               | caotic123 wrote:
                | Yep, these encodings are just examples of what you can
                | do; as vmladenov said, try reading the examples above.
        
               | vmladenov wrote:
               | You can take issue with the order of the presentation but
               | the definition is in the following section. There's
               | nothing to look up, it's just an arbitrary choice by the
               | author.
        
         | amw-zero wrote:
         | How is ML syntax any different from Typescript?
        
           | andrewflnr wrote:
           | In nearly every aspect other than the "name: type" part. I
           | think they were talking about the whole thing. And indeed,
           | the language in the article they posted is not so far off
           | Typescript syntax-wise.
        
       | anfelor wrote:
        | The Readme defines:
        | 
        |     // Data NonEmpty = | New a (NonEmpty a)
        |     NonEmpty
        |       |A :: ~ * ~> * => {(list A) :: |new}.
        |     // A list non-empty is list only with new constructor
        | 
        | The Haskell datatype in the comment reads like a co-inductive
        | definition: a stream of list elements. But the NonEmpty in your
        | language should be
        | 
        |     data NonEmpty a = New a (List a)
       | 
       | right?
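The distinction the commenter draws (one mandatory head plus a possibly empty tail, rather than an endless stream of non-empty tails) can be loosely sketched in Python (an illustration of the suggested inductive reading, not PomPom code):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    """A plain, possibly empty linked list: rest=None is the nil case."""
    value: int
    rest: "Optional[Node]"


@dataclass
class NonEmpty:
    # Matches `data NonEmpty a = New a (List a)`: exactly one
    # mandatory head, then an ordinary (possibly empty) tail,
    # not an infinite stream.
    head: int
    tail: Optional[Node]


def ne_head(ne: NonEmpty) -> int:
    # Total by construction: there is no empty case to handle.
    return ne.head
```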
        
         | caotic123 wrote:
          | Exactly, it does not necessarily mean co-data. Thank you for
          | reporting it, I will fix it.
        
       | mbid wrote:
       | _> Pompom is an attractive implementation of an extensional (!)
       | dependently typed language_
       | 
       |  _> Pompom provides [...] a strong normalization system_
       | 
       | How is this possible? Extensional dependent type theory has
       | undecidable term/type equality, so there cannot be a strong
       | normalization algorithm.
        
         | caotic123 wrote:
            | Oh, it is extensional in the sense of supporting the K
            | axiom, actually (not in the sense of propositional equality
            | coinciding with definitional equality) :).
        
           | caotic123 wrote:
            | As far as I know, PomPom has decidable type checking though.
        
       | imode wrote:
       | This isn't a one-day implementation.
        
       | cjfd wrote:
       | What exactly are the foundations of this? The rules on the page
       | suggest that it is basically the calculus of constructions but
       | the example involving the list suggests that there are inductive
       | types too. Are the inductive types part of the foundations and
       | omitted in the list of rules or are they something else that is
       | not part of the foundation?
        
         | caotic123 wrote:
          | Pompom does not offer inductive data types; instead, it
          | provides static symbols (as in the LF framework or the
          | λΠ-calculus modulo). Of course, we do not use the rewriting
          | foundation of these frameworks; rather, we apply a usual
          | unification algorithm to get the same power. There are a lot
          | of things that need proper formalization in the core, but that
          | is not the intention of this work _yet_. _To be fair, these
          | are not inductive data types as in Coq/Agda._
        
           | [deleted]
        
       | baddash wrote:
        | So, as someone who is curious but has no idea of the theoretical
        | background here: what should I learn in order to help me
        | understand the code and use this language in a basic way? I
        | think the idea of being able to construct proofs is pretty cool
        | but I'm pretty lost here.
        
         | caotic123 wrote:
          | I also recommend Software Foundations in Coq:
          | https://softwarefoundations.cis.upenn.edu/.
        
         | evolveyourmind wrote:
         | If you want to learn more about the basics of theorem proving
         | through dependent types, Wadler's Agda tutorial at
         | https://plfa.github.io would be a good starting point
        
           | baddash wrote:
           | appreciate it, thanks
        
       | jimsimmons wrote:
       | I'm constantly baffled by what functional programmers call
       | "simple". I'm sure this language does a lot with little but
       | calling it simple is audacious to say the least. Why do so many
       | things happen in a single line? Lack of imperative constructs in
       | these languages forces one to nest a lot to be expressive and we
       | get a symbol salad. It's almost as if the code got passed through
       | a minifier/function inliner.
       | 
       | Unless FP embraces more modular and structured ways of
       | programming it'll always remain a niche. OCaml's 'let' and
       | Haskell's 'where' are steps in the right direction but we need to
       | go a lot farther. To think that any human, with any amount of
       | training, can parse such code in one pass is a fantasy. Since
       | parsing is part of the coding loop, developer productivity is
       | compromised massively.
       | 
       | Mathematics has a similar problem where single letter variable
       | names with tiny letters on all 4 corners are ubiquitously used.
       | There's definitely a tendency in the community to play "symbol"
       | golf. It's high time we improve ergonomics of both math and FP
       | because the rewards can be tremendous. Rust manages to do some of
       | this to great success and programmers have embraced it so well.
        
         | carlmr wrote:
          | I agree with you that breaking everything up with well-named
          | let-bindings is a great idea, but most ML-based languages do
          | support that.
         | 
         | >To think that any human, with any amount of training, can
         | parse such code in one pass is a fantasy.
         | 
         | This is exactly what I think about OOP object hierarchies,
         | where one function is distributed over 15 files and you need to
         | keep the state of the object in your mind when thinking about
         | what this function could do.
         | 
         | In contrast when reading F# I often only need a single-pass,
         | because with let bindings and the |> operator I can see how the
         | data goes in, what is done to it one step at a time, and what
         | comes out. It's as simple as it gets. Granted that single pass
         | might be slow, but that's because it is dense and doesn't
         | contain so much extraneous noise, like the boilerplate
         | standard-OOP creates.
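The single-pass, data-flows-downward reading described above can be loosely imitated outside F# too. A Python sketch (the `pipe` helper and function names are invented for illustration; Python has no built-in |>):

```python
from functools import reduce


def pipe(value, *steps):
    """Thread a value through a sequence of functions, left to right,
    mimicking F#'s |> operator."""
    return reduce(lambda acc, step: step(acc), steps, value)


def sum_of_doubled_evens(xs):
    # Reads top to bottom: data goes in, one transformation per line,
    # result comes out.
    return pipe(
        xs,
        lambda ys: [y for y in ys if y % 2 == 0],  # keep the evens
        lambda ys: [y * 2 for y in ys],            # double each one
        sum,                                       # collapse to one value
    )
```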
         | 
          | If you then work with a team that uses proper railway-
          | oriented programming, i.e. not only pure functions but ones
          | that use the type system to express failure (say, a sqrt
          | function that returns Some(sqrt(x)) for x >= 0 and None for
          | x < 0), then it becomes even simpler and more explicit to
          | understand what can go wrong and how it is dealt with,
          | instead of some sub-sub-sub-function throwing an exception
          | that you never heard about until it happens at runtime.
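The sqrt example above, sketched in Python with Optional standing in for F#'s option type (an illustration of the idea, not code from the thread):

```python
import math
from typing import Optional


def safe_sqrt(x: float) -> Optional[float]:
    """Failure is visible in the signature: callers must handle None."""
    return math.sqrt(x) if x >= 0 else None


def fourth_root(x: float) -> Optional[float]:
    # Chaining short-circuits on the first None: the essence of
    # railway-oriented programming, without exceptions.
    r = safe_sqrt(x)
    return safe_sqrt(r) if r is not None else None
```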
         | 
         | You can almost never single-pass OOP code, especially if some
         | architecture astronaut that memorized the GoF book had their
         | way with it.
        
         | amw-zero wrote:
         | You learned one way of programming, and that's how you think.
         | Nothing wrong with that.
         | 
         | But, many people are used to the functional paradigm. That
         | that's baffling to you is strange. People have different
         | perspectives.
        
         | simion314 wrote:
         | >Mathematics has a similar problem
         | 
          | Really??? You want to do a search and replace over all math
          | text, like replace(" = ", " is equal ")???
          | 
          | I think that abstraction is a feature we intelligent animals
          | have. It makes the text easy to read and understand, and much
          | faster; the only downside is people like you who can't jump
          | over the introduction and just go to the interesting chapter.
         | 
          | Though if I misunderstood you, I wish to see what your
          | suggested math looks like, how you can jump to a random
          | location and read and still understand stuff, and what
          | equations without one-letter symbols look like.
        
         | tome wrote:
         | > I'm constantly baffled by what functional programmers call
         | "simple".
         | 
         | This github repo is clearly not for "functional programmers"
         | nor a generalist audience on HN. It's for experts in type
         | theory, graduate students, academics etc.. The word "simple" is
         | relative to that particular audience.
        
           | jimsimmons wrote:
            | Posting to HN with a title that says "you can" makes your
            | point moot, no?
        
             | gpderetta wrote:
             | As someone that has very little idea of what all of that
             | is, I think the title is great and certainly made me
             | consider looking into the implementation.
             | 
             | It might be wildly optimistic, but encouraging people to
             | learn new things, especially by trying, is always a good
             | thing.
        
             | tome wrote:
              | Hmmm yes, perhaps. Imagine a submission titled "A linear
              | filter for DSP you can solder up yourself in a day". 95%
              | on HN _couldn't_. Would the response "I'm constantly
              | baffled by what electrical engineers call "simple"." be
              | warranted? Maybe.
        
             | Zababa wrote:
             | I think that's the case for most DIY projects that we have
             | on Hacker News. Most assume that you have lots of X in the
             | first place. It can be lots of experience with programming
             | languages, it can be a lot of space, it can be tools.
        
         | Zababa wrote:
         | I think there is a big difference between functional
         | programmers and people making proofs with programs.
         | 
         | > Lack of imperative constructs in these languages forces one
         | to nest a lot to be expressive and we get a symbol salad
         | 
          | I don't think that's true. If you really have nothing in your
          | language, you can emulate this with small functions. Most
          | functional languages do have support for "let ... in", which
          | allows you to basically program like in an imperative
          | language. There's also another point: sometimes no name is
          | the best name. In imperative programming you sometimes have
          | to either nest a lot of functions (like `toto(titi(tata(x)))`
          | with a lot more parameters), or give temporary names that are
          | sometimes not helping at all.
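Both styles side by side in a Python sketch (`toto`/`titi`/`tata` are the comment's own placeholder names, defined here only so the example runs):

```python
# Placeholder transformations, standing in for real pipeline steps.
def tata(x): return x + 1
def titi(x): return x * 2
def toto(x): return x - 3


def nested(x):
    # Nested style: no intermediate names at all.
    return toto(titi(tata(x)))


def named(x):
    # Named-intermediate style, the moral equivalent of let ... in.
    prepared = tata(x)
    scaled = titi(prepared)
    return toto(scaled)
```

Whether `prepared` and `scaled` help depends entirely on whether better names than these exist, which is the comment's point.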
         | 
         | > Unless FP embraces more modular and structured ways of
         | programming it'll always remain a niche.
         | 
         | > To think that any human, with any amount of training, can
         | parse such code in one pass is a fantasy.
         | 
         | I could say the same for the usual OO/design pattern soup. The
         | difference is that in the last 30 years, lots of people made a
         | lot of money with OO training and consulting, and the same
         | didn't happen with FP. What I'm saying is that it's not an
         | inherent problem of FP, so we shouldn't treat it like such.
         | It's a training/cultural/popularity problem.
         | 
         | > It's high time we improve ergonomics of both math and FP
         | because the rewards can be tremendous. Rust manages to do some
         | of this to great success and programmers have embraced it so
         | well.
         | 
          | Rust is a good compromise I think. A strong base of FP and
          | imperative, while still allowing the useful OO patterns.
         | However, I think the best innovation of Rust is around tooling,
         | and this is where FP is often lacking. Cargo is a pleasure to
         | use for basic stuff (which is 95% of programming), the compiler
         | is very nice and gives useful error messages (unlike old
         | OCaml/C++ error messages). An interesting fact is that Rust's
          | error messages were inspired by Elm, which is also a functional
         | programming language. You're right about the ergonomics of
         | Haskell and such being a bit rough sometimes, but what I find
         | really baffling is that it took so long to just care about
         | error messages. Rust also has great IDE support, rust-analyzer
         | is impressive. All of that to say that as much as Rust gets
         | some things right by allowing some imperative programming, it
         | also gets tooling right, and that matters at least as much to
         | the ergonomics of a language.
        
         | TeMPOraL wrote:
         | Let me play the devil's advocate. Since you mentioned math's
         | dense notation - perhaps people should slow down when reading
         | functional code too.
         | 
         | You say:
         | 
         | > _To think that any human, with any amount of training, can
         | parse such code in one pass is a fantasy._
         | 
         | But that's a big assumption: that you need, or want, to parse
         | such code in one pass. This works when code is so trivial you
         | have to speed-read it so your brain doesn't get bored when
         | going through heaps of letters saying very little - a common
         | problem in mainstream languages. Maybe dense functional code
         | should be read like mathematics - you might need to spend a
         | moment or three over that single line. If it takes 5x longer to
         | parse it, but it's 10x more expressive than equivalent "normal"
         | code, then that's still a 2x speedup.
         | 
         | There are also extra benefits of density/conciseness: brain's
         | equivalent of cache locality. You might need to spend more time
         | reading the thing the first time, but you won't be constantly
         | re-reading it while working on it, the way you'd do with
         | verbose, enterprise Java-style code. Dense notation persists in
         | sciences partly because of that effect.
        
           | kragen wrote:
           | I've heard that a typical reading speed for math papers is a
           | page per day. In extreme cases, you might organize a
           | semester-long graduate seminar so the participants can help
           | one another understand a 30-page paper that has a result they
           | think is important.
           | 
            | Math notation isn't designed to be as fast as possible to
            | read--not even as fast as possible to read _a given idea_,
            | much less _a given page_. It's designed to be fast enough to
            | _write_ that you can write down several forms of the same
            | formula as you go through a calculation or other proof, or
            | that while arguing with someone about something you can
            | write down a formula on a blackboard without losing their
            | attention. Also, it's designed to be compact enough that
            | subtle errors of logic have no place to hide (though often
            | it achieves its compactness by being ambiguous, which
            | provides them with a different place to hide: in what is
            | not stated).
           | 
           | -- *** --
           | 
           | My favorite example of the benefits of math notation is the
           | derivation of the one-pass formula for population standard
           | deviation.
           | 
           | Because the average is defined as the ratio of the sum of the
           | data items to the number of data items, the canonical
            | definition of the population standard deviation is the
            | square root of the mean of the squares of the differences
            | between the individual data items and the ratio of the sum
            | of the data items to the number of data items. Now, aside
            | from the fact
           | that this definition is pretty confusing, computing it that
           | way requires two passes over the data: once to compute the
           | sum of the data items, and a second time to compute the
           | differences. That means you need memory enough for all the
           | data, which can be a problem, and it also means that
           | incrementally updating the result for a new data item
           | requires you to recalculate the deviations of all the
           | previous data items.
           | 
           | The one-pass, online method is to calculate the square root
           | of the reciprocal ratio between the number of data items and
           | the difference between the sum of the squares of the data
           | items and the ratio between the square of the sums of the
           | data items and the number of data items. That's not just hard
           | to _derive_ , it's hard to even _understand_.
           | 
            | That is to say, √((Σ( _x_ ²) - (Σ _x_ )²/ _n_ )/ _n_ ). If
            | you're unfamiliar with math this will seem like Greek to
            | you (partly because it is in fact part Greek), but it's
            | actually not that much harder to understand than the plain
            | English definition in the previous paragraph; in standard
            | two-dimensional math notation it's a little easier. With a
            | pencil or a piece of chalk it's also _immensely_ easier to
            | _write_ than the English definition.
           | 
           | But the _real_ advantage is that, in about four or five lines
           | of fairly transparent formulas, you can derive it from the
           | other definition, in such a way that mistakes are, if not
           | trivial, at least reasonably feasible to spot. Doing this
           | without a computer algebra system requires writing out the
            | whole formula in different forms _four or five times_, which
           | puts brevity at a real premium.
           | 
           | And that, _pace_ Iverson, is the power of notation as a tool
           | of thought.
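The derivation's payoff, sketched in Python (an illustration of the one-pass formula discussed above, not code from the thread): a single loop accumulates n, Σx, and Σx², so the data can be consumed as a stream and never needs to fit in memory.

```python
import math


def std_dev_one_pass(xs):
    """Population standard deviation via sqrt((Σx² - (Σx)²/n) / n).

    One pass accumulates the count, the sum, and the sum of squares;
    memory use is constant regardless of how many items arrive.
    """
    n = s = sq = 0.0
    for x in xs:
        n += 1
        s += x
        sq += x * x
    return math.sqrt((sq - s * s / n) / n)
```

(For large data this textbook formula can lose precision to cancellation; Welford's online algorithm is the numerically careful variant, but the algebra above is exactly the one the comment derives.)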
        
         | caotic123 wrote:
          | You can always use let (in our case, def) to minimize the
          | problem with inlining. I think functional programmers get
          | used to big expressions because they do not inspect every
          | symbol when they are reading; the context gives enough
          | information! But you are correct, I couldn't make the
          | syntax/semantics better, because the implementation should
          | also be very simple. And yes, PomPom is very simple compared
          | to other proof assistants (we are not dealing only with
          | functional programming but also dependent types).
        
           | saithound wrote:
           | I think you made good trade-offs regarding the syntax. Your
           | language needs to be simple in the sense that it should not
           | be complicated to parse unambiguously. But it also had to
           | remain somewhat usable.
           | 
           | I think you made a good choice: it's easy to parse, and not
           | all that difficult to read for the people who are likely to
           | use it, after some minutes of deliberate practice.
           | 
           | Sure, it's quite symbol-heavy, and it's not as clean as, say,
           | Agda or Idris. But (after a few minutes of practice) I find
           | the difficulty entirely comparable to reading Coq.
           | 
           | Unsurprisingly, people who by their own admission never use
           | functional, much less dependently typed, programming
           | languages won't find it easy to read. But nothing you can do
           | would make it easy to read for them, any more than you can
           | make Jack Sprat easy to read for those who do not speak any
           | English. To convey what it does, you'd have to write it in an
           | entirely different language, and it would lose its very
           | purpose in the process.
        
           | caotic123 wrote:
            | Also, my current work/job is using Kind as a foundation;
            | the purpose of that language is exactly what you have asked
            | for. Have a look at https://github.com/uwu-tech/Kind.
        
             | jimsimmons wrote:
             | Cool! No offence, I went on a rant hoping to start a
             | discussion about FP. The fact that you have a dependently
             | typed language in 1000 lines is an amazing feat. Don't want
             | to take anything away from that. I just wish the functional
             | community took readability more seriously is all
        
               | tluyben2 wrote:
                | Maybe it is because, when you are doing this all day,
                | you really don't see this anymore? I can read J/K just
                | fine, as well as Haskell, but I find many imperative
                | languages noisy and basically ugly and unreadable. I
                | like things terse and as much as I can on one screen,
                | so I don't have to scroll or remember things (when you
                | get older, remembering wtf something was called again
                | gets pretty annoying; very strict static typing and
                | defining types precisely (aka a zipcode or telephone
                | number _are not_ strings!) together with terse
                | functional constructs help a lot; the IDE will know
                | everything so you can focus on implementation).
        
             | devit wrote:
             | Interesting!
             | 
              | Though it seems to share the problem of all current
              | dependently typed languages of not supporting efficient
              | implementation, since everything has to be boxed and
              | there are no linear types.
              | 
              | So you are forced to choose between something efficient
              | but with no built-in dependent types, like Rust, or a
              | horribly inefficient dynamically typed language.
        
               | zozbot234 wrote:
               | AIUI, the integration of linear types (or substructural
               | types in general) with dependent typing is still a matter
               | of research. Even "simple" type system extensions like
               | higher-kinded types come with a lot of added complexity.
               | 
               | There is also what's arguably a deeper obstacle to
               | "efficient" implementation of dependent types because
               | dependent typing does away with the phase separation
               | between compile-time and run-time. We do have "program
               | extraction" features in many DT languages to mitigate
               | this, but they're still ad-hoc additions, there isn't yet
               | a principled approach to the issue.
        
               | erichocean wrote:
               | > _So you are forced to choose between something
               | efficient but with no built-in dependent types like Rust
               | or an horribly inefficient dynamically typed languages._
               | 
               | Or...use TLA+, which doesn't have types (but you instead
               | define near-trivial _type invariants_ that the model
               | checker checks). This turns out to have a lot less
               | ceremony, while producing extremely useful results
               | quickly.
               | 
               | tl;dr TLA+ is _a lot_ more practical if you care about
               | bug-free software, and not even in the same universe in
               | terms of difficulty as something like Coq. However, Coq
               | can do fancy mathematics that TLA+ doesn't even try to
               | do, so _both_ should exist. (Neither are easy to learn,
               | but TLA+ is much, _much_ easier than Coq.)
               | 
               | I'm familiar with Coq, and reach for TLA+ and Alloy for
               | practical programming. Coq is super-interesting, but
               | ultimately not very practical for programming today. It's
               | very nice for doing weird math stuff though. (You can't
               | do any weird math stuff in TLA+.)
               | 
               | In the end, it kind of depends where you think you'll get
               | the most bang for the buck with formal verification: can
               | you confidently write correct code, if you get the design
               | right? Or are you afraid that even with a flawless
               | design/spec, you'll still screw up the code?
               | 
               | If it's the latter, Coq (and relatives) are what you
               | want, though almost no-one uses the generated (read:
               | provably correct) code. Expect to spend years on anything
               | useful and produce at least one PhD, probably multiple,
               | in the process. It's an absolutely enormous amount of
               | work.
               | 
               | OTOH, if you're concerned that your proposed _design_
               | might have issues, then TLA+ is many, _many_ orders of
               | magnitude more useful in practice, because it can help
               | you produce a correct design with very little effort
               | (days to weeks). TLA+ helps you find specification errors
               | extremely easily, and the specs (once you get used to it)
               | are easy to write.
               | 
               | Once the design/spec has been tested to work correctly in
               | TLA+ (using a model checker), you still have to implement
               | it (e.g. in Rust), but honestly, that's straightforward
               | once the spec is correct and your mental model of the
               | problem is solid. Highly recommended.
        
               | zozbot234 wrote:
               | > tl;dr TLA+ is a lot more practical if you care about
               | bug-free software
               | 
               | TLA+ is not generally used for end-to-end checking of
               | actual program code, unlike Coq. Usually you build a
               | simplified "toy" model in TLA+ of some architecture of
               | interest, then use model checking to try and _refute_
               | some claimed properties of that model. If the model
               | checker finds a refutation of some property, then yes,
               | that's a verified bug in your toy model. If it fails to
               | find one, that's not a proof of correctness other than in
               | very special cases where a property can be checked
               | finitely.
               | 
               | So yes, model checking is helpful in some cases, but it's
               | not a _generally_ useful tool. And there are ways to do
               | the same things in general proof assistants, e.g. by
               | interfacing them with external SMT tools. The TLA
               | language in itself is also unproblematic, it simply adds
               | modalities for time, state and non-determinism to
               | ordinary logic, and there are well-known ways to convert
               | ("embed") those features to a form that a proof
               | assistant can work with.
        
               | erichocean wrote:
               | > _TLA+ is not generally used for end-to-end checking of
               | actual program code, unlike Coq._
               | 
               | To my knowledge, TLA+ is never used for checking program
               | code. At least, I've never heard of anyone doing it.
               | 
               | > _model checking is helpful in some cases, but it's not
               | a generally useful tool_
               | 
               | I strongly disagree. Spending time trying to "prove" in
               | Coq an incorrect, flawed specification is an insane waste
               | of time. A model checker can show you the problems in
               | (literally) seconds.
               | 
               | Consider Raft, for instance, which was formally verified
               | in Coq. You know what they did first? Write a TLA+ model,
               | and then model check that _until they got the design
               | right_. Trying to _start_ in Coq (with an incorrect
               | design) would not have been smart.
               | 
               | Model checkers are _extremely useful_ tools.
               | 
               | ----
               | 
               | Going further, the TLA+ spec of Raft is less than 500
               | lines.
               | 
               | The Coq proof of Raft is ~50,000 lines. It resulted in a
               | verified implementation that to the best of my knowledge,
               | _no one uses in production_.
               | 
               | It's a nice achievement, but in terms of usefulness? TLA+
               | has Coq beat by a country mile.
        
               | zozbot234 wrote:
               | > Spending time trying to "prove" in Coq an incorrect,
               | flawed specification is an insane waste of time. A model
               | checker can show you the problems in (literally) seconds.
               | 
               | Sure, if there's anything a TLA+ model checker is good at
               | it's searching for counterexamples. But in pretty much
               | any case where such a search is computationally viable,
               | it's also easy to set it up in a proof assistant. By and
               | large, model checkers use the exact same toolset under
               | the hood as the "sledgehammer" automated proof search in
               | a proof assistant, such as SAT and SMT satisfiability
               | solvers.
               | 
               | > It resulted in a verified implementation, that to the
               | best of my knowledge, no one uses is production.
               | 
               | No one uses a TLA+ model directly in production either,
               | because that's not what TLA+ is. The question only makes
               | sense for a tool like Coq where one _can_ work end-to-end
               | with an actual implementation.
        
               | erichocean wrote:
               | > _No one uses a TLA+ model directly in production
               | either_
               | 
               | This is false. Many, many companies use software derived
               | from TLA+ models "directly in production." Amazon and
               | Microsoft (Azure) are the two biggest examples. Neither
               | relegates TLA+ usage to "R&D" on experimental systems. Both
               | are using it to develop and harden actual production
               | systems used by millions (billions?) of people daily, and
               | the TLA+ toolset is used daily by normal production
               | engineers, not academics (as is true of Coq).
               | 
               | Coq is great, I like it a lot. As of 2021, it's not very
               | practical for the kinds of problems normal software
               | engineers encounter, whereas TLA+ _is_.
        
               | nextos wrote:
               | What do you think about Ada Spark?
        
               | amw-zero wrote:
               | Have to jump in on a TLA+ comment, especially one about
               | how it handles "types." I say this all over the Internet
               | - the way that TLA+ handles types is the absolute ideal
               | from the programmer's perspective - there is nothing
               | special about them! No special syntax, semantics,
               | nothing. It's all the same language and logic, unlike
               | most type systems which have a completely separate type
               | language with subtly different semantics.
               | 
               | The problem is, that programmer simplicity pushes the
               | complexity to the type checking. You can't statically
               | check TLA+ types; you have to model check them, which is
               | much more intensive.
               | 
               | Maybe there's a happy medium.
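               | The idea can be sketched in a few lines of Haskell
               | (everything below is hypothetical; it just mirrors the
               | shape of a TLA+ TypeOK invariant plus a tiny
               | explicit-state checker):

```haskell
-- A TLA+ "type invariant" is an ordinary predicate over states;
-- the model checker evaluates it on every reachable state.
data State = State { clock :: Int, queue :: [Int] }
  deriving (Eq, Show)

typeOK :: State -> Bool
typeOK s = clock s >= 0 && all (>= 0) (queue s)

-- A toy next-state relation over a bounded state space.
next :: State -> [State]
next (State c q) =
  [State (c + 1) q | c < 3] ++ [State c (c : q) | length q < 2]

-- Explicit-state search: return the first state violating typeOK.
checkInvariant :: State -> Maybe State
checkInvariant s0 = go [] [s0]
  where
    go _ [] = Nothing
    go seen (s : rest)
      | s `elem` seen  = go seen rest
      | not (typeOK s) = Just s            -- counterexample found
      | otherwise      = go (s : seen) (rest ++ next s)
```

               | There is no separate type language anywhere: the "type
               | check" is just evaluating a formula over states, which
               | is why it has to be done by (model) checking rather
               | than statically.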
        
               | erichocean wrote:
               | > _You can't statically check TLA+ types_
               | 
               | I wish I could someday meet a programmer whose main
               | problem, daily, is that their code won't type check.
               | Otherwise, everything's great!
               | 
               | I like type checking, but not because the static type
               | checker helps me write correct code. (When it does, the
               | "bugs" it finds are trivial, at the same level as the
               | syntax checker.)
               | 
               | I like type checking because it makes documentation
               | easier, it makes writing code easier (code completion),
               | because it means I can write fewer comments, etc. Not
               | because of "correctness", and since that's the goal of
               | something like TLA+, I haven't (yet) found myself wishing
               | for it.
               | 
               | Especially with my experience using Coq, the way TLA+
               | handles types is a breath of fresh air. Simply lovely.
        
               | amw-zero wrote:
               | The trivial bugs you're describing are what the most
               | basic type systems catch. Types are not just what is
               | present in Java; types are any proposition that can be
               | made about a program before program execution. Which
               | propositions a type system allows you to make is a
               | design decision as well as an implementation challenge.
               | 
               | But if you look at something like F* or Idris, the types
               | allow extremely rich propositions. I think you're selling
               | types short, even though they of course are not the
               | answer to all problems.
        
       | Twirrim wrote:
       | > Pompom language is so simple that you can implement it yourself
       | just by looking in the source code (you can think that our
       | language is the BASIC language of proof assistants).
       | List           | A :: ~ * ~> * => {(list A) :: |new |empty }. //
       | A list is either a new or an empty constructor
       | 
       | Okay... BASIC is a high-level language, and it's aimed at
       | people who are not involved in the sciences. If your goal is a
       | high-level syntax aimed at people outside of those with a
       | formal-proof background, I think that's a big miss. The syntax
       | is anything but BASIC, compounded by the choice of lisp.
        
         | tluyben2 wrote:
         | > the choice of lisp.
         | 
         | Where? No lisp here as far as I can see. Haskell you mean?
         | 
         | But this is BASIC and simple to follow (including the source
         | code of the language) for people interested in type systems
         | and languages, not for just anyone. If you scroll down you
         | also see the calculus described as simple. This means: for
         | people interested in implementing this, it is a very simple
         | implementation, and that is indeed correct. Nice work!
        
         | caotic123 wrote:
         | I think you misunderstood; I just made an analogy with BASIC.
         | BASIC is a language that students often implement when they
         | are learning compiler topics, because BASIC is simple to
         | learn and implement in the universe of structured languages.
         | So, what I am saying is that you can do the same thing with
         | PomPom, but of course aiming at people who have at least a
         | little experience with functional programming and type
         | systems.
        
           | tluyben2 wrote:
           | It is nicely done; the implementation is indeed simple and
           | easy to read! It will hopefully help more language
           | implementors understand the subject.
           | 
           | I guess comparing it to BASIC is not ideal, as this is
           | complex subject matter: languages like Idris, which have a
           | vastly more complex implementation but therefore also a
           | nicer (imho) syntax, are still hard for most people without
           | a theoretical background (I was raised and educated by
           | Dijkstra pupils, so it was pounded into me from very
           | young). I found The Little Typer a good read on the
           | subject, but that might be impossible for people without
           | background as well; I cannot really estimate that.
        
         | [deleted]
        
       | bsaul wrote:
       | in practice, what kinds of proofs are people building when
       | building real-world programs?
       | 
       | Are people writing only top-level assertions (such as "this
       | main function always terminates") while the checker points at
       | all the gaps?
       | 
       | Or does one have to write proofs for every single intermediate
       | layer and function?
       | 
       | In which case, do people then have access to prebuilt "proven
       | functions" in a stdlib? Such as "NeverEmptyList" or
       | "AlwaysGreaterThanXVariable"?
        
         | whateveracct wrote:
         | It depends? But it's nice to have the option.
         | 
         | DTs don't mean everything is proven to the max.
         | 
         | Haskell already has a proven `NonEmpty` list without DTs - you
         | can prove that with a product type alone :)
         | 
         | And that proof is useful because it allows you to have better
         | proofs down the line. For instance, the min or max of a
         | NonEmpty always exists. But the min/max of a list can be
         | Nothing.
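         | As a quick sketch of that point (Data.List.NonEmpty ships
         | with GHC's base library):

```haskell
import Data.List.NonEmpty (NonEmpty (..))

-- A NonEmpty list is a head paired with an ordinary (possibly
-- empty) tail, so "has at least one element" holds by
-- construction; no dependent types needed.
smallest :: Ord a => NonEmpty a -> a
smallest = minimum  -- total here; 'minimum []' would crash

example :: Int
example = smallest (3 :| [1, 2])  -- evaluates to 1
```

         | The payoff is exactly the one described above: 'minimum'
         | over a plain list is partial, but over NonEmpty it is total.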
        
         | jacobparker wrote:
         | Some examples from _Software Foundations_ (a series of books
         | about Coq, available online):
         | 
         | I just wrote something I called insertion sort. I want to know
         | that this is a valid implementation of sorting. What does it
         | mean to be a sorting algorithm? It means that the output is
         | sorted, and it's a permutation (shuffling) of the input. This
         | is an exercise here:
         | https://softwarefoundations.cis.upenn.edu/vfa-current/Sort.h...
         | 
         | Say I've written a Red-Black tree. I want to know that it
         | behaves like a map, or that it keeps the tree appropriately
         | balanced (which is related to performance):
         | https://softwarefoundations.cis.upenn.edu/vfa-current/Redbla...
         | 
         | One more: say you have specified a programming language, which
         | includes describing the structure of programs (the grammar
         | essentially) and "small step" semantics (e.g `if true then x
         | else y` can take a step to `x`). It would be interesting to
         | show that any well-typed halting program can be evaluated in
         | multiple steps to exactly one value (i.e. the language is
         | deterministic and "doesn't get stuck" (or, in a sense, have
         | undefined behaviour)). That's the subject of volume 2
         | https://softwarefoundations.cis.upenn.edu/plf-current/toc.ht...
         | Beyond this, you may have done a similar thing for a lower
         | level language (machine assembly, wasm, ...) and have a
         | compiler from your language to the low level language. You may
         | want to prove that the compiler "preserves" something, e.g. the
         | compiled output evaluates to the same result ultimately.
         | 
         | RE: termination, in something like Coq that is a bit special
         | because every function terminates by construction. That might
         | sound limiting, but it's not really; there are ways around it. It
         | would, however, be impossible to write something like the
         | Collatz function in Coq and extract it in the obvious sense.
         | 
         | EDIT: and there are other ways these tools can (theoretically)
         | be used to work with programs, but that's a long story. There
         | have been real-world programs built like this, but it is very
         | uncommon at this time. It is an area of active research.
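         | To make the sorting spec concrete, here is the same
         | specification written as an executable Haskell predicate: a
         | testable analogue of the VFA theorem, not the Coq proof
         | itself (function names here are made up):

```haskell
import Data.List (sort)

-- "f is a sorting algorithm" means: the output is ordered, and it
-- is a permutation of the input. In Coq this pair of clauses is
-- proved for *all* inputs; here it can only be checked per input.
sorted :: Ord a => [a] -> Bool
sorted (x : y : rest) = x <= y && sorted (y : rest)
sorted _              = True

permutationOf :: Ord a => [a] -> [a] -> Bool
permutationOf xs ys = sort xs == sort ys  -- same multiset

isValidSortOn :: ([Int] -> [Int]) -> [Int] -> Bool
isValidSortOn f xs = sorted (f xs) && f xs `permutationOf` xs
```

         | Note that 'sorted' alone is not enough: 'const []' passes it
         | on every input, which is why the permutation clause is part
         | of the specification.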
        
           | kragen wrote:
           | Hey, thank you very much for these pointers! SF is one of
           | those things I keep meaning to get around to. Just yesterday
           | I was trying to figure out how I'd go about proving a binary
           | search correct.
        
         | nickdrozd wrote:
         | > in practice, what kind of proof are people building when
         | building real world programs ?
         | 
         | Here's an example of a proof from a Turing machine simulator
         | written in Idris [1]. The claim is that the length of the tape
         | never decreases after taking a step.
         | 
         | The "claim" is stated in the type signature, and the "proof" is
         | an implementation of that type. That's what "propositions as
         | types" means. The whole thing looks like a regular function,
         | except that it doesn't do anything and it never gets called.
         | However, by virtue of having been accepted by the type-checker
         | it verifies the claim about that piece of the program's
         | behavior.
         | 
         | [1]
         | https://github.com/nickdrozd/rado/blob/86790bbb218785e57ea88...
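         | A loose Haskell analogue of that trick, using GADTs (all
         | names below are made up, and this is far weaker than what
         | full dependent types allow):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Track the tape length in the type. The signature of 'extend'
-- *is* the claim "a step never shrinks the tape": the result type
-- has one more cell than the argument, so an implementation that
-- dropped cells would simply not type-check.
data Nat = Z | S Nat

data Tape (n :: Nat) where
  Nil  :: Tape 'Z
  Cell :: Char -> Tape n -> Tape ('S n)

extend :: Tape n -> Tape ('S n)
extend = Cell '_'
```

         | As in the Idris version, nothing here runs for its value:
         | the guarantee is established entirely by type-checking.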
        
         | amw-zero wrote:
         | Here's a talk where someone shares proofs within the sel4
         | verified OS project: https://youtu.be/AdakDMYu4lM.
        
       | isaac21259 wrote:
       | Sorry but you cannot implement this in a day. I've written my own
       | language very similar to this and from my experience it takes way
       | longer. Implementing type checking for just lambdas, pi types,
       | universes, unit, and absurd would take a day or two on its own.
       | Not to mention Sigma types, co-product types, w-types, identity
       | types, natural numbers, and lists. You also have type inference
       | and evaluation. Also the time spent coming to grips with what all
       | of this means. It's a really cool project but I expect it would
       | take a week minimum to implement this and have a solid
       | understanding of everything you've done.
        
         | caotic123 wrote:
         | Well, half of it does not necessarily have to be implemented
         | in the core (we are not talking about complex languages like
         | Agda), but yeah, I think if you do not know anything about
         | PomPom it will take some time to finish. But I am pretty sure
         | that if you have some knowledge of the core you can make a
         | lot of progress in one day.
        
       ___________________________________________________________________
       (page generated 2021-10-02 23:02 UTC)