[HN Gopher] Where do type systems come from? (2017)
       ___________________________________________________________________
        
       Where do type systems come from? (2017)
        
       Author : signa11
       Score  : 189 points
       Date   : 2021-09-30 03:37 UTC (1 day ago)
        
 (HTM) web link (blog.felipe.rs)
 (TXT) w3m dump (blog.felipe.rs)
        
       | lewisjoe wrote:
       | I wrote on this topic, trying to simplify where types came from
        | and why they're important -
       | https://docs.google.com/document/d/1-aLUwnN0XzLbzICnFLWfCLuD...
       | 
       | Related HN discussion:
       | https://news.ycombinator.com/item?id=21898309
        
       | lisper wrote:
       | It never ceases to amaze me how theorists can take simple
       | concepts and make them impenetrable.
       | 
       | A type is a (possibly infinite) subset of all the possible values
       | that can be provided as input to a program, or all the possible
       | results a program can produce.
       | 
        | Type theory is about trying to figure out at compile time what
        | type the output of a function will be (i.e. what subset of
        | possible values it will belong to), given that the inputs are
        | of a certain type (i.e. belong to a certain subset).
       | 
       | It is not possible to figure this out in general because that
       | would require solving the halting problem, so the art of
       | designing type systems is choosing subsets of the data that allow
       | you to prove useful things about programs at compile time so that
       | you can find errors before the program runs, and eliminate run-
       | time checks and hence make the programs run faster.
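        | 
        | A minimal TypeScript sketch of that idea (the names here are
        | just illustrative): the parameter type carves a subset out of
        | all possible values, and the checker derives which subset the
        | result belongs to without ever running the program.
        | 
        |     // "Dir" is a finite subset of all string values.
        |     type Dir = "north" | "south" | "east" | "west";
        | 
        |     // If the input is proven to be in Dir, the output is
        |     // proven to be in Dir too - all at compile time.
        |     function opposite(d: Dir): Dir {
        |       switch (d) {
        |         case "north": return "south";
        |         case "south": return "north";
        |         case "east": return "west";
        |         case "west": return "east";
        |       }
        |     }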
       | 
       | The reason people who like dynamically typed languages get
       | frustrated with statically typed languages is that they value
        | development speed more than they value run-time speed or
        | avoiding run-time errors, so they don't want to put in the
        | extra effort
       | needed to provide the compiler with the information it needs to
       | do these compile-time derivations, or put up with the compiler
       | complaining when it is unable to do them. They just want the
       | program to run, and they're willing to pay for that with some
       | inefficiency and run-time errors.
       | 
       | That's pretty much it.
        
         | auggierose wrote:
          | Yup. Exactly that. Type theorists have co-opted much of the
         | landscape of formal / mechanised mathematics, I believe much to
         | its detriment. The problem is right there in your statement:
         | Type systems are for static checks. This makes a lot of sense
         | in modern programming languages, but much less so in mechanised
         | mathematics: After all, the notion of theorem subsumes all
         | those static checks.
        
           | creata wrote:
           | > Type theorists have coopted much of the landscape of formal
           | / mechanised mathematics, I believe much to its detriment.
           | 
           | What foundation would you rather have?
           | 
           | And what would it do better / how would it be less
           | "detrimental" to formal mathematics?
        
             | auggierose wrote:
             | Good question! I believe I have the solution for this,
             | actually. I am just working out the foundations properly (I
             | call it "Abstraction Logic"), but the basic direction of it
             | is described at https://practal.com
        
               | creata wrote:
               | > the solution for this
               | 
               | But what is "this"? I think the only criticism of
               | existing theorem provers at https://practal.com is that
               | Coq uses coercions rather than subtyping. Is there a
               | practical example where Coq just doesn't have a good
               | equivalent for what subtypes can do?
        
         | Koshkin wrote:
         | The difference between a type and a set is often formulated by
         | saying that types are "intensional" whereas sets are
         | "extensional," meaning that the definition of a set imposes a
          | constraint on what objects are qualified to be the set's
          | elements, while the definition of a type imposes a constraint
          | on the operations (including construction) that can be
          | performed on objects of the types involved.
        
         | pyrale wrote:
         | > It never ceases to amaze me how theorists can take simple
         | concepts and make them impenetrable.
         | 
         | Yeah, it's pretty much like those people taking a reasonable,
         | standard car, and removing the motor to add horses instead. Who
         | in their right mind would do that? Turns out, plenty of people
         | in the past just did that for no reason at all.
        
           | lisper wrote:
           | I think you may have misunderstood my intent here. That
           | criticism was not directed at the OP, it was sympathizing
           | with the OP, who cited this passage:
           | 
           | "Linear type systems are the internal language of closed
           | symmetric monoidal categories, much in the same way that
           | simply typed lambda calculus is the language of Cartesian
           | closed categories. More precisely, one may construct functors
           | between the category of linear type systems and the category
           | of closed symmetric monoidal categories."
           | 
           | That is what I was referring to when I used the word
           | "impenetrable". I was talking about pedagogy, not substance.
        
         | flohofwoe wrote:
         | This "mathematical interpretation" is still far too complicated
         | for most "statically typed languages" (outside the "functional
          | niche"). You have a handful of hardware-compatible integer- and
         | floating-point data types which can be combined into composite
         | data structures. The compiler checks upfront whether
         | assignments are type-compatible (which is the biggest and most
         | important difference to dynamically typed languages).
         | 
         | That's pretty much it.
        
           | eru wrote:
           | Eh, even outside the functional niche generic types are
           | pretty mainstream these days. Eg people reasonably expect
           | that their language's standard library offers a sorting
           | function that can deal with any comparable items.
           | 
           | And once you have that kind of functionality, you need more
           | than just what you described.
           | 
           | (And, no, Go's approach doesn't cut it. Their 'generic'
           | sorting interface only works for in-place algorithms.)
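            | 
            | A sketch of that expectation in TypeScript (illustrative,
            | not any particular standard library's actual API): one
            | generic sort for any item type, given a comparison.
            | 
            |     // T is checked statically at every call site.
            |     function sortBy<T>(
            |       xs: readonly T[],
            |       cmp: (a: T, b: T) => number
            |     ): T[] {
            |       // copy first, so the input stays untouched
            |       return [...xs].sort(cmp);
            |     }
            | 
            |     sortBy([3, 1, 2], (a, b) => a - b); // number[]
            |     sortBy(["b", "a"], (a, b) => a.localeCompare(b));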
        
             | flohofwoe wrote:
             | Generics are mostly just a compile-time duck-typing layer
             | on top of the simple static typing model I described (with
             | the purpose of getting some of the flexibility of dynamic
             | typing back into a statically typed language).
        
               | reuben364 wrote:
               | Generics are not really the statically typed answer to
               | duck-typing. Subtyping with structural types, such as in
               | TypeScript, is.
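                | 
                | A small TypeScript illustration (hypothetical
                | names): the object satisfies the interface purely
                | by its shape, which is the statically checked
                | analogue of duck typing.
                | 
                |     interface Quacker { quack(): string; }
                | 
                |     // Never declared as a Quacker, but it has the
                |     // right shape, so the call below type-checks.
                |     const duck = { quack: () => "quack!", legs: 2 };
                | 
                |     function greet(q: Quacker): string {
                |       return q.quack();
                |     }
                | 
                |     greet(duck); // OK: structural subtyping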
        
         | cjfd wrote:
         | Well, the article does not actually answer 'where do type
         | systems come from?' (as in the type systems of programming
         | languages) but 'where do type theories come from?' (as in the
         | foundations of mathematics). I think type systems would have
         | been invented for programming languages even if the
         | mathematicians had never bothered with type theory, simply
         | because they are necessary. I mean treating an integer stored
         | in memory as a string or vice versa is still not going to work
         | no matter what foundations for mathematics one likes...
        
           | setr wrote:
           | JS would like to have a word with you.
        
             | cjfd wrote:
             | Yes, but would I like to have a word with JS? I am quite
             | certain the answer is 'no'.....
             | 
             | On a more serious note: JS is in the background also
             | keeping track of types, obviously.
        
         | bryanrasmussen wrote:
         | >They just want the program to run, and they're willing to pay
         | for that with some inefficiency and run-time errors.
         | 
         | or they don't make the kinds of errors that static typing helps
         | defend against so for them it is an impediment.
        
         | branko_d wrote:
         | > they value development speed
         | 
         | I never understood that. For anything non-trivial, development
         | speed is _increased_ with types: you get auto-complete, safe
         | refactorings, code navigation etc.
        
           | DeathArrow wrote:
           | Some languages even have implicit typing like C# and C++.
           | 
            | How hard is it to write var v = SomeMethod() or var v = 5?
        
             | eru wrote:
             | In eg Haskell you don't have to stick `var` in front of
             | stuff, either. And it's very much statically typed, but
             | also has type inference.
             | 
             | However, you still have to sometimes do some work to make
             | your ideas fit the type system. Number of keystrokes isn't
             | the only thing here.
        
           | kajecounterhack wrote:
           | Seems to me that there's historically been more cognitive
            | overhead to _learning_ most strongly typed languages, e.g. you
           | have to think about pointer vs reference, const, heap or
            | stack allocation, borrowing if it's Rust, etc. If you're not
           | already familiar with the concepts, your development velocity
           | is slow. But when you start writing more code, everything you
           | said becomes true: types yield a lot of benefit.
           | 
           | People also seem to associate strong typing with compilers
           | that bog down the write->run->debug loop, some of which used
           | to have notoriously bad error messages, and things have
            | changed because of JIT compilation (Dart can even be both AOT
           | compiled & JIT compiled), and also because modern compilers
           | are so much better optimized and user friendly compared to
           | just a decade ago (I mean, compare rustc error messages to
           | GCC, lol). Compilers are doing a much better job today of
           | helping decrease total debugging time.
           | 
           | FWIW this doesn't mean more advanced programmers necessarily
           | move to strongly typed languages to increase their
           | productivity. Instead, we are moving toward a progressively
            | typed world (e.g. typed Python, TypeScript) to bring the
           | benefits to users of dynamic languages as they begin to need
           | them (without forcing them to use anything). And strongly
            | typed languages are doing the opposite, e.g. getting lots of
            | dynamically typed features (e.g. type inference in C++ with
           | `auto`, initializer list syntax) which again, don't have to
           | be used unless it helps with code readability and/or
           | development velocity.
        
           | dan-robertson wrote:
           | The way static types can make development speed slower is not
           | by forcing you to write the same code with extra types added
           | in, but by forcing you to write different code. In this sense
           | it isn't really an argument about static types vs dynamic
           | types (imagine a language with no type checks where programs
           | have undefined behaviour at run time on anything that would
           | have been a static type error) but rather a statically typed
           | language vs a dynamic language.
           | 
           | There are patterns or language features in dynamically typed
           | languages which are very hard to translate to static types. I
           | think Peter Norvig has some slides somewhere comparing OOP
           | patterns to lisp, a more dynamic language, and that is the
           | sort of thing I'm thinking of here. For language features,
           | consider implementing an n-ary list map in Haskell vs Common
           | Lisp. Or the typed equivalent of delimited continuations.
           | 
           | That said, research on the differences between languages is
           | difficult and inconclusive. I would like to see more of it.
        
             | qsort wrote:
             | > implementing an n-ary list map
             | 
             | isn't that just flatMap?
             | 
             | > typed equivalent of delimited continuations
             | 
             | It's very clunky to do in Haskell, but conceptually they
             | can be built on top of a monadic pipeline.
             | 
             | In practice, the only case where statically typed languages
             | force you to write different code is if:
             | 
             | (a) the type system is too restrictive and dumb (no
             | generics, awkward function types, etc).
             | 
             | (b) you're trying to write a function that effectively
             | returns monster types like Union[None, int, str,
             | List[str]], which is common in dynamic languages but it's
             | something you're not really supposed to do.
             | 
             | I agree with your broader point, that static types can in
             | fact slow down development, but I have a different model.
             | 
             | Types are kind of two things at the same time: tags to tell
             | the compiler how many bytes need to be allocated, and
             | partial proofs of correctness of the program. The key
             | observation of dynamic languages is that in most cases we
             | can do without the first part: the runtime can handle that
             | in the same way it handles memory management. I believe one
             | of the mistakes of the original wave of dynamic typing
             | proponents is that they kind of threw away the baby with
             | the bathwater: the correctness part is important, and at
             | least a form of optional typing should be provided.
             | 
             | The real trouble is that it's not easy to conjure type
             | systems that are both expressive and ergonomic to use for
             | the _average_ developer. Sufficiently advanced type systems
             | are indistinguishable from math riddles, and in a working
             | environment that can be frustrating.
        
               | bopbeepboop wrote:
               | For people for whom coding is an iterative, exploratory
               | process, forcing type declarations upfront is not just
               | slower... it's a no-go.
               | 
               | There's no way to bootstrap that process for them: you're
               | asking for the output as an input.
               | 
               | I think TypeScript has one of the best compromises.
               | 
               | Also, I think people are understating how easy dynamic
               | typing makes substitutions -- everyone else is writing
               | their class twice (interface + class) just to get the
               | same flexibility.
        
               | qsort wrote:
               | > For people for whom coding is an iterative, exploratory
               | process, forcing type declarations upfront is not just
               | slower... it's a no-go.
               | 
               | Is there any kind of programming task where it's too much
                | to ask the programmer to know the _type_ of the function
                | he's writing?
               | 
               | The only place I can imagine types being a burden is in a
               | REPL (it's just boring to write them every time), but
               | even then _optional_ types like Python and TS are an
                | unequivocal win (you're presumably going to move your
               | code out of the notebook/repl and into proper source
               | files at some point).
               | 
               | > understating how easy dynamic typing makes
               | substitutions
               | 
               | Yeah so easy they're easily wrong. One of the worst
               | problems of dynamic languages is that they don't have the
               | concept of an interface, which makes it _harder_ and not
               | easier to write generic code.
               | 
               | > everyone else is writing their class twice (interface +
               | class)
               | 
               | That's caveman Java right there. Interfaces with defaults
               | (Java, Kotlin, even C# has something similar IIRC) can
               | easily emulate dynamic patterns like mixins while
               | preserving type safety and without the diamond problem.
        
               | girvo wrote:
                | Genuine question: what do old PHP's (not the newer
               | stronger versions) interfaces and "type" hints count as?
               | It's very much a dynamic programming language from what I
               | remember of it, but interfaces were crucial to it when I
               | was using it in anger.
               | 
               | Is it one of those in-between systems, just more on the
               | dynamic side?
        
               | qsort wrote:
               | I have never used PHP professionally, but if you're
               | talking about PHP 5 style interfaces, they are definitely
               | a form of type checking, and they were introduced with
               | the specific purpose of facilitating object oriented
               | programming in PHP.
               | 
               | The point I'm getting at is that using interfaces lets
               | you make guarantees about what methods the objects must
               | have. I'm not really picky about what specific technical
               | means are being used to enforce those, but passing an
               | object of the wrong type to a method is an entirely
               | avoidable category of errors, and I prefer them to be
               | flagged by the
               | compiler/interpreter/runtime/tsc/mypy/whatever as soon as
               | possible.
               | 
               | IME well designed interfaces are even more important than
               | well designed classes, and dynamic languages
               | unfortunately make it harder to enforce interfaces. It's
               | not by chance that the first thing Typescript did was to
               | introduce interfaces.
        
               | nicoburns wrote:
               | > Is there any kind of programming task where it's too
               | much to ask the programmer to know the type of the
               | function he's writing?
               | 
               | I think the problem is that lots of type systems are
               | inherently limited, so there may be patterns that you
               | want to express that your type system simply can't, or
               | that you have to write tons of boilerplate code to
               | express. In languages like Java or C# that don't have Sum
               | types, this can be as basic as "this value is a String or
               | Integer". You have to write a class hierarchy to express
               | this in Java. In JavaScript you simply accept a parameter
               | and branch on `typeof value`. More powerful type systems
               | allow you to express more patterns, but you still have to
               | type them out. This can be done, but if you're doing
                | exploratory work on an unknown dataset then this can make
                | the work _much_ slower. And there may be little benefit.
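                | 
                | For comparison, the same JavaScript pattern with
                | TypeScript's union types layered on top (a sketch):
                | 
                |     function describe(v: string | number): string {
                |       // The checker narrows the union per branch.
                |       if (typeof v === "string") {
                |         return v.toUpperCase(); // v: string here
                |       }
                |       return v.toFixed(2);      // v: number here
                |     }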
        
               | qsort wrote:
               | You're just vehemently agreeing here, that was in fact my
               | original point.
        
               | pjmlp wrote:
               | In C# you just declare them as dynamic if you wish to
               | emulate the ways of JavaScript.
               | 
               | People keep forgetting Basic is dynamic and .NET was
                | designed to support it as well; additionally it got
                | improved with DLR classes to support the IronPython and
                | IronRuby projects.
               | 
                | So you can go type crazy in C#, F# and C++/CLI, or just
               | get hold of those dynamic features.
        
               | nicoburns wrote:
               | Oh sure. But at that point you are using dynamic typing.
               | My dream language is one that allows for full gradual
               | typing, with the ability to opt functions in (opt out
               | could also work) to strict type checking which gives you
               | the safety and performance improvements where you want
               | them, but allows you the flexibility of dynamic typing
               | where you don't. PHP-style runtime type checks would be
               | automatically generated at the boundaries between
               | dynamically typed and statically typed functions.
        
               | zozbot234 wrote:
               | One problem with gradual typing is that it has comparable
               | performance overhead to dynamic typing: You can only
               | optimize out dynamic type checks, dispatching, marshaling
               | etc. if you have sizeable, self-contained portions of
               | your program (i.e. modules) that only use static types!
               | So the "graduality" of it goes mostly unused in practice.
               | 
               | It can be a useful pattern when designing interface
               | boundaries across modules (e.g. ABI's, data interchange
               | formats etc.) to allow for the required flexibility in a
               | limited context where the overhead isn't going to matter
               | all that much.
        
               | nicoburns wrote:
               | Performance is only half the point. The other half is
               | correctness. And I think this pattern of "important
               | correctness and performance sensitive core, surrounded by
               | simple but flexible glue code" is common enough that such
               | a language makes sense. It describes pretty much all of
               | the code that I write at work in fact. Being able to
               | simply add type annotations and have the compiler
               | generate the runtime type checks would be wonderful.
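                | 
                | A hand-written version of that boundary (a sketch;
                | today's TypeScript erases types rather than
                | generating such checks for you):
                | 
                |     function assertNumber(x: unknown):
                |         asserts x is number {
                |       if (typeof x !== "number") {
                |         throw new TypeError("expected number");
                |       }
                |     }
                | 
                |     // Statically typed core: no guards inside.
                |     function core(n: number): number {
                |       return n * 2;
                |     }
                | 
                |     // Dynamic glue: checked once at the boundary.
                |     function glue(input: unknown): number {
                |       assertNumber(input); // runtime type check
                |       return core(input);  // statically typed now
                |     }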
        
               | bopbeepboop wrote:
               | > Is there any kind of programming task where it's too
               | much to ask the programmer to know the type of the
               | function he's writing?
               | 
               | Yes -- that's exactly what I said.
               | 
               | Some programmers create code by iteratively refining the
               | types involved in short loops. Insisting that each of
               | these loops update typing is simply providing a broken
               | tool to people who code in that style.
               | 
               | > That's caveman Java right there.
               | 
               | Or you know, production corporate code -- eg, AWS-CDK
               | uses the IType/Type split to allow substitution of types
               | into functions that "just happen" when you can use
               | dynamic typing.
               | 
               | https://github.com/aws/aws-cdk
               | 
               | Typeclasses are great; dynamic type classing is better
               | for exploration-style coding, eg dynamically adding
               | members in Python.
        
               | dan-robertson wrote:
               | n-ary list map means a function that does map or map2 or
               | map3 or .... (Where map2 is zipWith, not the one you get
               | from the list monad.)
               | 
                | In Common Lisp the lambda-list looks something like:
                | 
                |     (MAPCAR func list &rest lists)
               | 
                | And a slow implementation might look like:
                | 
                |     (defun mapcar1 (f list &aux r)
                |       (prog ()
                |        g
                |          (when list
                |            (push (funcall f (pop list)) r)
                |            (go g))
                |          (return (nreverse r))))
                | 
                |     (defun every (pred list)
                |       (dolist (x list t)
                |         (unless (funcall pred x)
                |           (return nil))))
                | 
                |     (defun mapcar (f list &rest lists)
                |       (prog ((lists (cons list lists)) r)
                |        loop
                |          (when (every #'consp lists)
                |            (push (apply f (mapcar1 #'car lists)) r)
                |            (setf lists (mapcar1 #'cdr lists))
                |            (go loop))
                |          (return (nreverse r))))
               | 
                | To implement such a function in OCaml you need to do
                | something like:
                | 
                |     type ('ftype, 'output) t =
                |       | [] : ('a, 'a) t
                |       | (::) : 'a list * ('b, 'c) t
                |               -> ('a -> 'b, 'c) t
                | 
                |     val mapn : f:'f -> ('f, 'x) t -> 'x
                | 
                | In Haskell you would use a heterogeneous list and type
                | families. But note that I didn't implement mapn in OCaml
                | as it would be hard and unreadable...
               | 
               | One nice quality of dynamic languages is that you get
                | very useful objects like lists or Python/JavaScript style
               | 'dicts' and lots of standard library functions to operate
               | on them. You can implement them in a statically typed
               | language to some extent but they are much less versatile
               | and less first class. It means you often need to write
               | different functions that do basically the same thing
                | because they use different record or tuple types as
               | input. I don't think any mainstream languages are very
               | close to static typing with structural subtyping.
        
           | Cthulhu_ wrote:
           | > For anything non-trivial
           | 
           | There's the key phrase. There's plenty of people that can
           | whip up a POC or MVP in an afternoon of hacking or a week of
           | fiddling, but going from that to something that can safely
           | handle millions of users and hundreds of developer
           | contributions needs types and other assurances. If it fits in
           | a developer or a team's heads, then it's just fine - that's
            | how I did my first major JS application, just knowing what
           | fields my objects have and what types they are expected to
           | have. But that doesn't scale, people move on, gaps exist in
           | tests, etc etc etc.
        
           | Someone wrote:
           | Historically, in many languages, you didn't get either of
           | these. If you were lucky, you would have ctags
           | (https://en.wikipedia.org/wiki/Ctags) for navigation.
           | 
           | Also, development speed can go down if you have to show your
            | compiler that data conforms to a given type. Certainly for
            | one-offs where the user writes and maintains their own
            | program, skipping that can sometimes/often get you something
            | doing something useful for the inputs you have _now_ fairly
            | rapidly.
        
           | roenxi wrote:
           | > ...development speed is increased with types...
           | 
            | There might be a hidden assumption of steady-state in there.
           | In the initial ramp-up when people are learning a language,
           | or learning to code to start with, types will slow down
           | development because there is one more system to learn and use
           | correctly.
           | 
            | It is notable that if your code is going to be a bug-ridden
           | mess regardless of language features, language features that
           | reduce the number of bugs (like type systems) are a
           | hindrance.
        
           | mro_name wrote:
            | Maybe you get the sunshine case out of the door quicker. But
            | beware new year's or leap years. Or daylight saving changes.
            | Or non-ASCII.
           | 
           | I mean, who would ever care about fringe cases when you can
           | have continuous delivery instead?
        
             | DeathArrow wrote:
             | Well, if it works when you deploy, maybe you get lucky and
             | the bug will arrive after you've moved to another company.
             | :D
        
             | goto11 wrote:
             | I don't see how this is different between type systems. No
              | type system I know of guarantees you handle leap years
             | correctly.
        
               | mro_name wrote:
               | in real life nobody will guarantee you anything.
               | 
               | But treating dates and surnames alike doesn't help
               | robustness either.
               | 
               | Or separate types for metric distance and imperial
                | distance might have prevented the Mars(?) lander losses.
                | Or types for sea levels referenced to Rotterdam or Le
                | Havre might have prevented some bridges from having
                | differing levels.
        
               | mro_name wrote:
               | I apologise for the misleading example "leap year".
        
               | goto11 wrote:
               | I'm a fan of static typing, but if people think it will
                | protect them from logical errors like forgetting about
               | DST, then they are actually worse off.
        
               | reuben364 wrote:
                | I guess, in the sense that it frees cognitive load from
                | reasoning about properties of your program that are easily
                | computed, so you can think about properties that aren't.
               | Even then, maybe thinking about the former primes you to
               | reason about the latter. But even if the compiler tells
               | you what the type of something should be, you ultimately
                | have to synthesize something of that type.
        
               | mro_name wrote:
                | Think of the metric/imperial example above. The distance
                | isn't just a number. The same holds for most other things,
                | too. Being typed is key, be it statically or dynamically.
        
           | tshaddox wrote:
           | I think it's mostly a historical coincidence that popular
           | languages with conspicuous static type checking have often
           | been slower to develop with than popular languages without
           | conspicuous static type checking.
        
         | twelvechairs wrote:
          | Really good write-up. I'd add a couple of touch points:
         | 
         | - The art of designing type systems you describe is also
         | heavily constrained by the practical limits of language. We
         | could easily add more and more 'types' to a programming
         | environment to cover more use cases, but at some point it makes
          | the 'language' difficult when there are too many. Which is why
          | we turn to different languages for different uses.
         | 
         | - There's a bunch of different categories of types. This thread
         | is full of discussion of functions and lambda calculus which is
         | important as function types are some of the most complex,
         | especially for 'trying to figure out at compile time' as you
         | describe. But there's also common types like a 1 byte char, 4
         | byte unsigned int, or memory pointer that are more based on
         | hardware and established standards, as well as user defined
         | types and 'classes' which bring their own practical complexity
         | and approaches.
        
         | jasode wrote:
         | _> A type is a (possibly infinite) subset of all the possible
         | values that can be provided as input to a program, or all the
         | possible results a program can produce. [...] That's pretty
         | much it._
         | 
         | I've often seen this reductionist simplification of "type" but
         | I don't think it's accurate because of the emphasis on _values_
         | without mentioning the _rules for those values_.
         | 
         | The extra power of "types" comes from _behaviors_ of values.
         | 
          | E.g. in a mathematics example, a Complex Number when only seen
          | as "values" looks like a 2-dimensional vector. Therefore, R^2
          | and the complex plane can appear as the _same type_ but they
          | are not. Complex Numbers _have extra behavior rules_ e.g.
          | multiplication that plain R^2 does not.
         | (https://math.stackexchange.com/questions/444475/whats-the-
         | di...)
         | 
          | For a less mathy example, consider a typical DateTime type.
         | Yes, some underlying implementation may use a 64bit integer as
         | an offset from an epoch but a rich datetime type would also
         | have "behavior rules" such as adding 1 day to Saturday _makes
         | it wrap around back to Sunday_ because of modular arithmetic.
          | It's the built-in behavior rules that make using a DateTime
         | type _less error-prone_ than manipulating raw 64bit integers.
         | 
         | Personal anecdote: I initially learned an _incomplete and
          | misleading idea_ about "types" from K&R's _"The C Programming
          | Language"_. From that book, I only thought of "types" as
          | different data widths (or magnitudes). An "int" held 16 bits
          | and a "long" held 32 bits, etc. And C keywords like "typedef"
          | further add to the misunderstanding of "types" because typedef
          | is just a _name alias_ instead of defining _behavior rules_.
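          | 
          | A sketch of that distinction in TypeScript (names are
          | illustrative): the "behavior rules" live in the type's
          | operations, not in the raw representation.
          | 
          |     const DAYS = ["Sun", "Mon", "Tue", "Wed",
          |                   "Thu", "Fri", "Sat"] as const;
          |     type Day = (typeof DAYS)[number];
          | 
          |     // Modular arithmetic is built into the operation, so
          |     // callers can never produce an out-of-range "day".
          |     function addDays(day: Day, n: number): Day {
          |       const i = DAYS.indexOf(day);
          |       return DAYS[(((i + n) % 7) + 7) % 7];
          |     }
          | 
          |     addDays("Sat", 1); // "Sun" - wraps, never overflows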
        
           | chakkepolja wrote:
           | Complex multiplication can be thought of as just a different
           | function than real multiplication, defined *: C x C -> C or
           | something like that, right?
           | 
           | > rich datetime type would also have "behavior rules" such as
           | adding 1 day to Saturday makes it wrap around back to Sunday
           | because of modular arithmetic.
           | 
            | Conceptually, it's an overloaded method - that's a function,
            | again, with syntactic sugar. Functions are defined from one
            | set to another. The fact that we use the same names or
            | symbols doesn't change that types, for all simple purposes,
            | correspond to sets in math.
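            | 
            | Put in TypeScript terms (a sketch), the "extra behavior"
            | really is just another function over the same pairs of
            | numbers:
            | 
            |     type Complex = { re: number; im: number };
            | 
            |     // (a + bi)(c + di) = (ac - bd) + (ad + bc)i
            |     function mul(a: Complex, b: Complex): Complex {
            |       return {
            |         re: a.re * b.re - a.im * b.im,
            |         im: a.re * b.im + a.im * b.re,
            |       };
            |     }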
        
         | a_imho wrote:
         | 1. Reasoning about a subset of values is usually easier.
         | 
         | 2. Modeling the domain with types, making erroneous states
         | unrepresentable is helpful
         | 
         | 3. Carefully chosen types help with communicating intent
        
         | 4ad wrote:
         | You are describing types as sets, and in intensional type
         | theory types are not sets. There is also extensional type
         | theory, where types can be sets, but in general we are not
         | talking about that when we speak about type theory in
         | connection to computer science.
         | 
         | But regardless of ITT vs. ETT, the point of type theory is that
         | proofs are mathematical objects. That's where the real power
         | comes from and this is the power unityped languages lack.
         | 
         | I suspect that this false belief that dynamically-checked
          | language aficionados have, of types being sets, is responsible
         | for their disdain for statically-typed languages. If you think
         | that types are sets, it's easy to miss what they can do.
        
           | lisper wrote:
           | What is "intensional type theory"? Did you mean
           | "intentional"? I know what intentional logic is, but I've
           | never heard of intentional type theory before (and neither
            | has Wikipedia).
        
             | 4ad wrote:
             | At the risk of more confusion
             | https://ncatlab.org/nlab/show/intensional+type+theory.
             | 
             | As a gentle introduction to modern type theory I would
             | recommend https://arxiv.org/abs/1601.05035.
        
         | TeMPOraL wrote:
         | I have a question that's been on the back of my mind for a
         | while: why just types? Aren't there other things we should be
         | able to tell the compiler so it enforces correctness better? Or
         | a different way to tell those things? What I've seen so far
         | are:
         | 
         | - Type systems of varying complexity, where you sometimes have
         | to abuse the language in complicated ways to express
         | constraints you want;
         | 
         | - Pushing complex checks to runtime, even though they should be
         | - in principle - computable ahead of time. Or, the compiler
         | should be able to tell you ahead of time what you need to do to
         | ensure an assertion never fails.
         | 
         | - External tools like proof checkers.
         | 
         | I guess having types as names of constraints is useful. Then,
         | what I'm looking for is a better "satisfies" from Common Lisp -
         | one that would be guaranteed to run mostly arbitrary code at
         | compile time.
        
           | eru wrote:
           | > Aren't there other things we should be able to tell the
           | compiler so it enforces correctness better?
           | 
           | Well, programming language people tend to call any kind of
           | property you can statically check a 'type'.
        
             | Koshkin wrote:
             | There's static asserts.
        
           | lisper wrote:
           | That's a great question!
           | 
           | It turns out that sets are a very general concept. Anything
           | you can say about a program can be expressed in terms of sets
           | of states that program goes through as it runs. So there is
           | nothing you can say about a program that cannot be expressed,
           | at least in principle, in terms of sets.
           | 
           | The problem is that sets are themselves Turing-complete. If
           | you can add a member to a set, and remove a member from a
           | set, and test if a set is empty, that is a Turing-complete
           | set of primitives. So if you allow yourself to have a type
           | system that is so powerful as to be Turing-complete you
           | haven't gained anything because now the problem of debugging
           | problems in your type system is equivalent to that of
           | debugging problems in your programming language.
           | 
           | So in order to be useful, a type system has to be constrained
           | in some way that prevents it from being Turing-complete.
           | Choosing those constraints is the art of designing type
           | systems. No matter what you do, it will necessarily be the
           | case that if your type system is going to be useful at all
           | there will be certain things it cannot express. The trick is
            | to ensure that those things are things you don't actually
           | care about very much or, alternatively, that your language
           | allows you to "break out" of the constraints of the type
           | system when they become too burdensome and tell the compiler
           | to just do what you tell it despite the fact that it can't
           | prove there won't be errors.
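            | 
            | TypeScript is a handy demonstration of how easily this
            | line gets crossed - its type system is expressive enough
            | to compute at compile time (a sketch; `Five` evaluates,
            | during type checking, to five nested `Succ`s):
            | 
            |     type Zero = { tag: "zero" };
            |     type Succ<N> = { tag: "succ"; pred: N };
            | 
            |     // Recursion at the type level: Peano addition.
            |     type Add<A, B> =
            |       A extends Succ<infer P> ? Succ<Add<P, B>> : B;
            | 
            |     type Two = Succ<Succ<Zero>>;
            |     type Three = Succ<Two>;
            |     type Five = Add<Two, Three>;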
        
             | Koshkin wrote:
             | > _sets of states that program goes through_
             | 
             | I wonder if this is true of a program written in a purely
             | functional language.
        
           | sabas123 wrote:
           | This is a really good question!
           | 
            | Basically, a lot of properties that you can probably think
            | of that could be checked for but aren't expressible in
            | Java's type system can still be expressed using types in
            | another system. Therefore if you want something more
            | powerful than the types you're using, use more powerful
            | types! It turns out that we do not really need more than
            | types to reach the limits of what we can check for
            | automatically.
        
           | capitol_ wrote:
            | Lifetimes in Rust fit this; they are a sort of type.
        
           | shikoba wrote:
            | If you're interested, there is the Calculus of Constructions,
            | which can express anything.
        
         | goto11 wrote:
         | I think this is too simplistic for real-world programming
         | languages.
         | 
         | For example TypeScript has the "any" and the "unknown" types.
         | Both encompass the same set of values (any possible value which
         | can be represented in the language), but the allowed operations
         | are different.
         | 
         | C# has something similar with "object" and "dynamic".
         | 
         | For a static type system it makes more sense to think of a type
         | as a set of supported operations rather than a set of values.
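          | 
          | Concretely, in TypeScript (a sketch):
          | 
          |     const a: any = JSON.parse('"hi"');
          |     a.toUpperCase();    // allowed: any permits everything
          | 
          |     const u: unknown = JSON.parse('"hi"');
          |     // u.toUpperCase() would be rejected at compile time.
          |     if (typeof u === "string") {
          |       u.toUpperCase();  // allowed only after narrowing
          |     }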
        
           | louthy wrote:
           | > For a static type system it makes more sense to think of a
           | type as a set of supported operations rather than a set of
           | values.
           | 
           | It really doesn't. 'any', 'unknown', 'object', and 'dynamic'
           | are all 'top' in type systems (the type that is the super-
            | type of all other types); they hold the set of all values.
           | The fact a language assigns additional semantics based on
           | their name is just sugar over the underlying type system, the
           | type-system can live quite happily without names.
           | 
           | But even then, you could consider the name of a type a
           | 'discriminating tag', like in discriminated unions. So,
            | something like:
            | 
            |     type Top = Any a
            |              | Unknown a
            |              | Object a
            |              | Dynamic a
           | 
           | The same type, but with different states depending on the
           | semantics of the language.
        
             | goto11 wrote:
             | I don't know what you mean with "sugar", but the fact that
             | several different types can represent the exact same set of
             | possible values means that a type cannot be considered as
             | just a set of possible values.
             | 
             | > The same type, but with different states depending on the
             | semantics of the language.
             | 
             | This is just playing games with definitions. What you call
             | "state" is a type in the type system of the programming
             | language.
        
           | philh wrote:
           | Or Haskell has types like `Void`, `Maybe`, and `Monad`. If
           | you think of them as "sets of possible values", these all
           | have no possible values. But they're _very_ different types.
           | 
           | You could probably define terms such that of these, only
           | `Void` is a "type" - for example, it's the only one that you
           | can use directly as function inputs or outputs. (i.e. you can
           | write a function `Void -> Int` but not a function `Maybe ->
           | Int`.) I think that would be nonstandard, but maybe
           | reasonable to do. But then when you explain what a "type" is
           | you've only explained a subset of what "type theorists" talk
           | about, which includes the others.
        
             | lkitching wrote:
             | Maybe and Monad are not types - Maybe is a type constructor
             | and Monad is a type class.
        
               | louthy wrote:
               | `Maybe a` is a type:
               | 
               | For a start it's a discriminated union (a sum type), its
                | set of values is:
                | 
                |     a + 1
               | 
               | `a` is the values in the `Just` case, and 1 is the
               | `Nothing` case. It's obvious why they're called sum-types
               | when you view it like this.
               | 
               | `Maybe a` is a type-lambda abstraction (still a type)
               | 
               | `Maybe Int` is a type-application, passing Int to the
               | `Maybe a` type-lambda as an argument, to get a `Maybe
               | Int` in return (still a type).
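                | 
                | For comparison, a TypeScript rendering of that
                | sum type (a sketch): the values of `Maybe<A>`
                | are exactly the values of `A` plus one more,
                | i.e. a + 1.
                | 
                |     type Maybe<A> =
                |       | { tag: "just"; value: A } // the `a` cases
                |       | { tag: "nothing" };       // the `1` case
                | 
                |     const just = <A>(value: A): Maybe<A> =>
                |       ({ tag: "just", value });
                |     const nothing: Maybe<never> =
                |       { tag: "nothing" };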
        
               | lkitching wrote:
               | > `Maybe a` is a type-lambda abstraction (still a type)
               | 
               | I don't know what you mean by this - lambdas are values
               | not types. You could argue the type of the Maybe type
               | constructor is Type -> Type but there is no Type type in
               | Haskell so Maybe is not a 'type' in the Haskell type
               | system in the way e.g. String is. The separation of types
               | and values is AFAIK the defining characteristic of non-
               | dependent type systems, which Haskell is (for now).
        
               | philh wrote:
               | As I said, I think that's a reasonable but nonstandard
               | way to define the word "type" but it doesn't make much
               | difference - type theory concerns itself with them,
               | whatever they're called.
               | 
               | But to support my "nonstandard" claim:
               | 
                | * https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/compil...
                | mentions "Types (possibly of higher kind); e.g.
               | [Int], Maybe"
               | 
               | * https://limperg.de/ghc-extensions/#polykinds talks
               | about "types of different kinds", using Int and Monad as
               | examples.
               | 
               | * https://en.wikipedia.org/wiki/Kind_%28type_theory%29
               | says "the type [] (list) is a type constructor". It uses
               | the terms "data type" and "proper type" for what I think
               | you simply want to call "type".
               | 
               | It's also possible usage is just inconsistent on the
               | subject. (Evidence for this: that second link calls
               | constraints "type-like things", when they're just types
               | according to me.) But my sense is that usually, people
               | engaging seriously with type theory or advanced Haskell
               | would call Maybe and Monad types.
               | 
               | (You're of course right that Maybe is a type constructor
               | and Monad is a typeclass, but that doesn't mean they
               | aren't also types.)
        
         | DougBTX wrote:
         | > more than they value run-time speed
         | 
         | Maybe this is a dated perspective now, since more modern
         | languages like Julia are aiming for dynamic types with good
          | run-time speed, or like Nim, which aims to make statically
          | typed languages as easy to use as dynamic ones.
        
           | cb321 wrote:
           | I use Nim, C, Python/Cython, Shell, etc., etc. After a short
            | while Nim felt like the fastest/most fluid development
            | experience of the whole bunch - while also having a strong
            | static typing style. I find it a very "get to the point"
            | programming language/style.
           | 
           | Sure, sure. Nothing is perfect. It could use stronger REPLs,
           | one has to workaround compiler limitations occasionally, and
           | I seem to be the only one telling people to use the TinyC/tcc
           | backend as a default in their $HOME/.config/nim/nim.cfg or
           | $HOME/.config/nim/config.nims for sub-second compile-times
           | (part of "fluid"). It is very good as-is and deserves more
           | attention than it gets.
        
         | eru wrote:
         | Be careful, that's just one way of looking at things. See
         | https://lispcast.com/church-vs-curry-types/ or
         | https://en.wikipedia.org/wiki/Simply_typed_lambda_calculus#I...
         | 
         | > A type is a (possibly infinite) subset of all the possible
         | values that can be provided as input to a program, or all the
         | possible results a program can produce.
         | 
          | What you are describing are 'extrinsic types'. There are also
          | 'intrinsic types'.
         | 
         | Btw, types don't have to be sets at all. (Nor are they all
         | isomorphic to sets. Some types describe things bigger than
         | sets.)
        
           | nicoburns wrote:
           | For anyone trying to understand the difference between
           | extrinsic and intrinsic definitions of sets, consider the set
           | of birds.
           | 
           | Intrinsic definition:
           | 
           | "Birds are a group of warm-blooded vertebrates constituting
           | the class Aves, characterised by feathers, toothless beaked
           | jaws, the laying of hard-shelled eggs, a high metabolic rate,
           | a four-chambered heart, and a strong yet lightweight
           | skeleton."
           | 
           | Extrinsic definition:
           | 
           | "Birds are the set made up of: Sparrows, Pigeons, Robins,
           | Hummingbirds, Hawks, Eagles, Parrots, Penguins, Ducks,
           | etc..."
        
             | eru wrote:
             | Hmm, I'm not sure anyone who doesn't already know will be
             | helped by this analogy?
             | 
             | Though not sure whether Wikipedia does a better job:
             | 
             | > An intrinsic/Church-style semantics only assigns meaning
             | to well-typed terms, or more precisely, assigns meaning
             | directly to typing derivations. This has the effect that
             | terms differing only by type annotations can nonetheless be
             | assigned different meanings. [...] In contrast, an
             | extrinsic/Curry-style semantics assigns meaning to terms
             | regardless of typing, as they would be interpreted in an
             | untyped language.
             | 
             | > The distinction between intrinsic and extrinsic semantics
             | is sometimes associated with the presence or absence of
             | annotations on lambda abstractions, but strictly speaking
             | this usage is imprecise. It is possible to define a Curry-
             | style semantics on annotated terms simply by ignoring the
             | types (i.e., through type erasure), as it is possible to
             | give a Church-style semantics on unannotated terms when the
             | types can be deduced from context (i.e., through type
             | inference). The essential difference between intrinsic and
             | extrinsic approaches is just whether the typing rules are
             | viewed as defining the language, or as a formalism for
             | verifying properties of a more primitive underlying
             | language. Most of the different semantic interpretations
             | discussed below can be seen through either a Church or
             | Curry perspective.
        
         | ridiculous_fish wrote:
         | Static types become less useful as the runtime truth diverges
         | from your static model. For example, different browsers have
         | different APIs. Your code may type check against your browser
         | today, but that doesn't mean it will work correctly in the
         | browser from yesterday or tomorrow, or in other browsers.
         | Dynamic types let you smooth over these differences.
         | 
         | That's an argument for dynamic types, not against static types,
         | and I think languages with both (TypeScript, ObjC) are a sweet
         | spot.
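          | 
          | A sketch of that smoothing-over in TypeScript (the optional
          | member is illustrative): model the uncertain API as
          | optional, and let a runtime check narrow it.
          | 
          |     interface MaybeModernBrowser {
          |       share?: (data: { url: string }) => Promise<void>;
          |     }
          | 
          |     async function shareUrl(b: MaybeModernBrowser,
          |                             url: string) {
          |       if (b.share) {
          |         await b.share({ url }); // narrowed: defined here
          |       } else {
          |         console.log("fallback: copy " + url);
          |       }
          |     }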
        
         | [deleted]
        
       | pyrale wrote:
       | People interested in this topic might also enjoy this video by
       | Philip Wadler describing the way logicians and computer
       | scientists converged towards equivalent ideas.
       | 
       | https://www.youtube.com/watch?v=IOiZatlZtGU
        
         | vishnugupta wrote:
         | This is a _fantastic_ talk. I 've watched a couple of different
         | versions of this.
         | 
         | Towards the end he spoke about something in passing that's
         | stayed with me.
         | 
          | Throughout the early/mid 20th century different mathematicians
          | created their own theories of mechanised computation: Turing
          | machines, lambda calculus, SK combinators etc. However, they
          | all turned out to be equivalent; see for example the Church-
          | Turing thesis. So this leads one to believe strongly that
          | mechanised computation is "discovered" rather than invented.
          | It is also a good indication that computing theory is on a
          | solid foundation.
         | 
          | However, when it comes to distributed and parallel computing,
          | theories have been proposed but no equivalence has been found
         | (or at least proved). It's an active field which he hopes will
          | result in a similarly solid foundation for distributed
          | computing. At the moment it seems to be on shaky ground.
        
           | kenjackson wrote:
           | What is a distributed or parallel computing theory in this
           | context? They can't compute anything above and beyond
           | sequential computers.
        
           | felixyz wrote:
           | This three-part series of (long) posts is a very thorough and
           | fascinating counterpoint to the idea that computation was
           | discovered and that there is anything surprising about the
           | confluence of lambda calculus, Turing machines etc, mostly
           | using primary sources from Leibniz onwards:
           | 
           | "Finite of Sense and Infinite of Thought: A History of
           | Computation, Logic and Algebra"
           | 
           | https://pron.github.io/posts/computation-logic-algebra-pt1
        
           | zozbot234 wrote:
           | > However, when it comes to distributed and parallel
           | computing theories have been proposed but no equivalence has
           | been found
           | 
           | Is this really the case? It seems that game semantics ought
           | to be able to model these concerns. The logical side of it
           | would then be some sort of multi-modal logic.
        
       | cryptica wrote:
       | I like Mathematics but I hate the language of Mathematics.
       | 
       | Mathematicians chose to sacrifice readability in favor of
       | terseness; they invented a large number of abstractions, often
       | with only tiny differences between them. Also, descriptions of
       | concepts tend to focus on the wrong characteristics... Maths
       | would be far more accessible if mathematicians described
       | everything in plain English and reused abstractions to describe
       | similar concepts instead of inventing entirely new words. It's
        | possible; they just don't want to do this because it wouldn't
       | make them sound as clever.
       | 
       | As a developer, I can explain a blockchain as:
       | 
       | 1. A decentralized ledger which relies on an asymmetric signature
       | scheme such as EdDSA, zero-knowledge proofs, or other mechanisms to
       | authenticate on-chain messages. A blockchain network forms either
       | a structured (deterministic) or unstructured (stochastic) P2P
       | mesh, ideally with each node having a partial view and
       | propagating messages through the network via a gossip protocol.
       | Blockchain nodes adhere to a consensus mechanism such as PoW or
       | PoS (or variants) to provide byzantine fault tolerance and
       | resilience to forks given certain finality conditions.
       | 
       | Or:
       | 
       | 2. A ledger which is replicated across multiple nodes/computers
       | which are connected together. The blockchain is usually a ledger
       | of accounts involving financial transactions. The authenticity of
       | each transaction can be independently verified by anyone by
       | checking that a digital signature on any transaction corresponds
       | to the account which claims to have sent the transaction - This
       | is possible because each account is associated with a 'public
       | key' which anyone can see and which can be used to verify that a
       | signature was generated by the sender since only the sender could
       | have known the 'private key' (secret) which was used to generate
       | this specific signature given this specific transaction and this
       | specific public key.
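       | 
       | A minimal sketch of the signature check from (2) in Python,
       | using the third-party cryptography package (the message and key
       | handling here are illustrative assumptions, not part of any
       | particular chain):
       | 
       |     from cryptography.exceptions import InvalidSignature
       |     from cryptography.hazmat.primitives.asymmetric.ed25519 import (
       |         Ed25519PrivateKey,
       |     )
       | 
       |     # The sender's keypair; only the sender holds the private key.
       |     private_key = Ed25519PrivateKey.generate()
       |     public_key = private_key.public_key()  # visible to everyone
       | 
       |     transaction = b"alice pays bob 10"
       |     signature = private_key.sign(transaction)
       | 
       |     # Anyone with the public key can verify authenticity.
       |     try:
       |         public_key.verify(signature, transaction)
       |         print("signature valid")
       |     except InvalidSignature:
       |         print("signature invalid")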
        
         | Jtsummers wrote:
         | > Maths would be far more accessible if mathematicians
         | described everything in plain English
         | 
         | Are you sure about that?
         | 
         | Take Kepler's Laws of Planetary Motion as an explanation of
         | orbital mechanics and try and do anything with just these
         | plaintext definitions (courtesy of Wikipedia):
         | The three laws state that:
         | 
         |     1. The orbit of a planet is an ellipse with the Sun at one
         |     of the two foci.
         | 
         |     2. A line segment joining a planet and the Sun sweeps out
         |     equal areas during equal intervals of time.
         | 
         |     3. The square of a planet's orbital period is proportional
         |     to the cube of the length of the semi-major axis of its
         |     orbit.
         | 
         | Now try and apply this without ever falling into mathematical
         | notations. And translate this into every language under the Sun
         | unless you really want it to only be comprehensible to English
         | readers.
         | 
         | Or take this formula, converted into plain English:
         | 
         |     The distance of a planet from the sun is proportional to
         |     the semi-latus rectum and one more than the product of the
         |     eccentricity of the orbit and the cosine of the angle
         |     between the planet's current position and its nearest
         |     approach.
         | 
         | To help out since at least one term in there may be unfamiliar
         | to most readers (courtesy of Wikipedia):
         | 
         |     The latus rectum is the chord parallel to the directrix
         |     and passing through a focus; its half-length is the
         |     semi-latus rectum.
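         | 
         | For reference, the formula being paraphrased appears to be the
         | polar equation of a conic-section orbit (my reconstruction,
         | not the commenter's):
         | 
         |     r = \frac{\ell}{1 + e \cos\theta}
         | 
         | where r is the distance from the Sun, \ell the semi-latus
         | rectum, e the eccentricity, and \theta the angle from nearest
         | approach.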
         | 
         | Now, take _that_ and suppose we have the distance from the sun
         | and all the other values, but not the angle, rephrase _that
         | abomination_ so that we can solve for the angle. Without using
         | mathematical notation. The only good thing about dropping
         | mathematical notation is that we'd be able to slow down
         | technological progress substantially, as we wouldn't even be
         | able to get to orbit anymore.
         | 
         | That idea of "solve for th" is "just" a basic application of
         | algebra. An act that would become impossible to do with any
         | kind of reasonable efficiency and confidence about correctness
         | without something like our current mathematical notation, and
         | plain English (very anglocentric, guess we don't want anyone
         | outside the anglosphere to participate) would be a poor
         | alternate.
        
         | Koshkin wrote:
         | Indeed, mathematical notation is often used (and abused) where
         | plain language would suffice. This is so widespread, in fact,
         | that one might think that there is a conspiracy, or a contest,
         | to come up with as unreadable a text as possible. On the other
         | hand, the (original) purpose of mathematical notation, in its
         | condensed form, is facilitating and, in many cases, automating
         | reasoning and calculations on a piece of paper. (You wouldn't
         | want to perform long division using words, believe me.) Also,
         | plain language may bring the possibility of uncertainty in
         | meaning and/or misinterpretation, so a definition or a
         | theorem stated using notation instead often serves to minimize
         | that possibility.
        
         | tel wrote:
         | Mathematical papers are essentially mathematicians shipping
         | pseudo code back and forth. There are good reasons for it, but
         | it's not meant for most audiences.
         | 
         | Mathematicians absolutely do write accessibly from time to
         | time, but when they do so you have to be fortunate enough to be
         | in the audience. Often these are aimed at only the most general
         | of audiences and thus are pretty sparse in detail.
        
       | tel wrote:
       | Russell's Hierarchy of Types isn't particularly where type
       | theories came from, though it was an early reference to the word
       | and obviously very adjacent to these developments. It is _a_ type
       | system employed to solve a particular problem, though.
       | 
       | Types have existed for a long time as mechanisms to provide
       | classification and structure to formal (and even informal)
       | languages. Even just before Russell, Frege (and many of his
       | contemporaries) made a huge deal of the fine distinctions in type
       | between propositions, expressions of truth, proofs, and their
       | ilk.
       | 
       | What really made type theories blow up was the recognition that
       | they corresponded with these very things: propositions,
       | judgements, proofs. Mathematicians developed type theories as
       | follow-on to Russell's (and others) work because they wanted to
       | better describe what kinds of things counted as valid
       | propositions and what kinds of corresponding proofs were formally
       | admissible. They also were critically tied to conversations of
       | syntax and semantics where mathematicians asked hard questions
       | about why funny symbols on pages ought to have any sort of force
       | of knowledge whatsoever.
       | 
       | So what made types rich was a recognition of the richness in the
       | structure of first- and higher-order logics. The recognition that
       | the power of the semi-formal language of proofs employed by
       | working mathematicians required sophisticated study in order to
       | describe and formalize. And that doing so required a distinction
       | between the object of mathematical work, the proof, and the
       | intent or semantics, the judgement.
       | 
       | Early PL type systems seem to have been convergent evolution.
       | Here we were as computer scientists making and operating new
       | formal languages and needing to describe certain properties of
       | the syntax (but really: its ultimate semantic weight as it is
       | operated by a machine) so that we could, e.g., make sure we used
       | properly sized registers. Only as more sophisticated programming
       | languages were developed that desired to simplify the supposed
       | programmer's task of working with abstraction did PL designers
       | begin to converge toward, steal, and ultimately influence these
       | languages of logic. The convergence, in my opinion, occurred
       | because there truly are significant similarities in the kinds of
       | mental processes accomplished by logicians and by programmers.
       | 
       | So where do type systems come from, really?
       | 
       | I think the good ones represent natural, parsimonious structuring
       | of logic. We land on compositional systems because they're easy
       | to reason about inductively. We land on sums and products because
       | of archetypes from boolean logic, our belief about the
       | "cheapness" of copying knowledge, and because of the natural urge
       | to package together and make choices. We land on
       | modules/quantifiers due to the value of abstraction in
       | compartmentalizing and reusing ideas.
       | 
       | We can relax and challenge some of these assumptions to come up
       | with new sorts of logic (and many PL designers and logicians make
       | their bread in this way). We can also imagine totally different
       | approaches to logic (like: state machine transition matrices)
       | which are more challenging for us to work with using human minds
       | or communicate to others using human speech. In this way, our
       | type theories and logics are tailored to how humans most
       | comfortably perceive the world.
        
         | brabel wrote:
         | > In this way, our type theories and logics are tailored to how
         | humans most comfortably perceive the world.
         | 
         | Is there any field of study where the whole point is NOT to
         | make concepts understandable by humans?
         | 
         | It could be interesting to let AI, for example, use theories
         | that are easy to "understand" by computers instead, whatever
         | that means.
        
           | kachnuv_ocasek wrote:
           | > Is there any field of study where the whole point is NOT to
           | make concepts understandable by humans?
           | 
           | Literary and art critique seems to be that field.
        
         | nathias wrote:
         | I don't think types really came from a natural way of
         | structuring logic but from the work of epistemology, developed
         | mainly in rationalism and Kant. For Frege, I think we can find
         | the idea of types already at work in his main epistemological
         | distinction between sense and meaning.
        
       | justshowpost wrote:
       | They should stop teaching students stringly-typed scripting
       | languages and all the confusion will magically disappear. A toy
       | language like Pascal would suffice for almost any educational
       | faculty if they are afraid that an _introductory C_ course is
       | too hard for the audience.
        
       | [deleted]
        
       | [deleted]
        
       | nathias wrote:
       | It was actually Frege that developed types as a solution to
       | Russell's paradox.
        
       | kongin wrote:
       | The type theory Russell invented has little to do with type
       | theory as used in programming languages other than the name. You
       | could make a strong case that grammar as a type theory for human
       | languages was an even earlier example of using types, but it's
       | one that's just as meaningless to us.
       | 
       | The early history of type theory for programming languages comes
       | from research into lambda calculus:
       | 
       | https://en.wikipedia.org/wiki/Simply_typed_lambda_calculus
        
         | jweir wrote:
         | In Church's "A Formulation of the Simple Theory of Types" he
         | says he is heavily indebted to Russell.
        
           | Ericson2314 wrote:
           | After that, it really gets going with Girard's System F and
           | Martin-Löf type theory.
        
       | golemotron wrote:
       | "When a program and a programmer love each other very much..."
        
       | peter_retief wrote:
       | I really enjoyed Bertrand Russell's "A History of Western
       | Philosophy" but never realized how prolific he was in mathemetics
       | and that he wrote "Mathematical Logic as Based on the Theory of
       | Types"
       | 
       | I imagine it is a harder read but probably worth adding to my
       | growing stack of books waiting to get read.
        
       | hootbootscoot wrote:
       | Maybe I'm being terribly naive, but I don't see this from a maths
       | perspective at all, I see this as about the shape of data and
       | about memory allocation. If one increments pointers across an
       | array of data, one should know the stepping size, for example.
       | 
       | How is this (programming/practical) side of the word "type
       | systems" related to formal/mathematical "type systems"?
       | 
       | I missed that part
        
         | Koshkin wrote:
         | Types are a kind of constraint on what kinds of logical
         | constructs, in one case, and what kinds of computational
         | constructs, in the other, are allowable. Logic and computation
         | are related, but they are not the same. (This difference is
         | also evident in how people think about monads in programming
         | vs. how they are defined and used in category theory.)
        
       | cb321 wrote:
       | If by "type systems/checking" one refers to constraints on "valid
       | operations in context" like comparisons, subtractions, additions
       | (or maybe in a prog.lang context assignment to lexically static
       | identifiers) then physical units of measure (seconds, meters|its
       | ancient cousins, etc.) were the first type system. I.e., one
       | cannot add/compare/etc. meters & seconds without conversion and
       | not all conversions are possible.
       | 
       | This remains the first such system most people learn in
       | elementary school. Historically such ideas arose along with
       | measurements themselves and computations upon their results like
       | comparison/subtraction/even taxation.
       | 
       | Yes, this is all (typically) manually checked and explicit
       | metadata symbols are carried through calculations. I still think
       | it's the primordial example that few mention in discussions like
       | these.
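       | 
       | A minimal sketch of units-as-types in Python (the Quantity
       | class and dimension encoding are my own illustration):
       | 
       |     class Quantity:
       |         def __init__(self, value, dims):
       |             self.value = value
       |             self.dims = dims  # e.g. {"m": 1} for meters, {"s": 1} for seconds
       | 
       |         def __add__(self, other):
       |             # Addition/comparison only makes sense for like units.
       |             if self.dims != other.dims:
       |                 raise TypeError("cannot add unlike units")
       |             return Quantity(self.value + other.value, self.dims)
       | 
       |         def __mul__(self, other):
       |             # Multiplication combines dimensions: m * s^-1 -> m/s.
       |             dims = dict(self.dims)
       |             for k, v in other.dims.items():
       |                 dims[k] = dims.get(k, 0) + v
       |             return Quantity(self.value * other.value,
       |                             {k: v for k, v in dims.items() if v})
       | 
       |     meters = Quantity(3.0, {"m": 1})
       |     seconds = Quantity(2.0, {"s": 1})
       |     speed = meters * Quantity(0.5, {"s": -1})
       |     print(speed.dims)  # {'m': 1, 's': -1}
       |     try:
       |         meters + seconds
       |     except TypeError as e:
       |         print(e)  # cannot add unlike units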
        
         | tel wrote:
         | Yeah, saying Russell invented types is fun but horribly wrong.
         | He invented other things and stood at the center of an
         | explosion in formal logics. But types are way old.
        
           | titzer wrote:
           | When people say a mathematician "invented" something, it's
           | understood that they didn't create it from nothing. Newton
           | didn't "invent" gravity, for example, he just came up with
           | the first mathematically rigorous theory for it. Same for
           | Russell and types.
        
             | wlib wrote:
             | This shows a funny way that the word invent has shifted in
             | meaning. Invent means "found" (almost literally "came upon"
             | in Latin) - yet for some reason it shifted to mean
             | "created" in a sort of "this is mine don't steal it from
             | me" sense. Then here we are clarifying that we _literally_
             | mean the actual meaning of the word. It's interesting that
             | math is considered impossible to have as intellectual
             | property, yet we think somehow that IP law is justifiable
             | for everything else.
        
               | titzer wrote:
                | Yeah, that is odd. The German word is "erfunden", which is
               | something like "completely found".
        
               | Koshkin wrote:
               | > we _literally_ mean the actual meaning
               | 
               | Also, in the everyday speech the word "literally" no
               | longer means what it was meant to mean.
        
           | JoBrad wrote:
           | Maybe he invented it the way that many people think the color
           | blue was invented? Causing everyone to see something for the
           | first time (especially when it was right under our noses) is
           | certainly impressive.
        
             | EarthLaunch wrote:
             | In my game design I've imagined two parallels to invention;
             | discovery and identification. All three are learning about
             | the world, but in slightly different ways.
             | 
             | Invention: I created that.
             | 
             | Discovery: I found that lying around.
             | 
             | Identification: I conceptualized a definition for that.
        
         | gimboland wrote:
         | I had that realisation when I saw this library:
         | https://pint.readthedocs.io/en/stable/ (for a dynamically typed
         | language, somewhat amusingly).
        
         | Aerroon wrote:
         | I had never thought of type systems in this way. If type
         | systems are effectively a "unit" attached to the data then you
         | can think of them as an unknown value in arithmetic.
         | 
         | Eg "4x + 4y = 40.4z"
         | 
         | x, y, and z here are units: x is meters, y is centimeters, and
         | z is decimeters. You could do the same with types. The computer
         | only knows the binary value, but the type assigns meaning to
         | the value just like physical units do.
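         | 
         | A quick sanity check of that equation in Python, reading x, y,
         | z as meter, centimeter and decimeter expressed in meters (my
         | reading of the example):
         | 
         |     import math
         | 
         |     x, y, z = 1.0, 0.01, 0.1  # meter, centimeter, decimeter
         |     print(math.isclose(4*x + 4*y, 40.4*z))  # True: both sides are 4.04 m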
         | 
         | This must've been really obvious to a lot of people, but it
         | only clicked just now for me. Thank you.
        
           | saeranv wrote:
           | This is exactly correct. And once you think of it as the
           | variable, you can then make the leap to thinking about it as
           | dimensions, which has some nice properties in physics, e.g.
           | vector addition/subtraction leads to tuples of dimensions,
           | while the product doesn't - which explains why we can't
           | add/subtract different units, but we can multiply them
           | together.
        
           | kroltan wrote:
           | By analogy, that's also how type inference works. If you know
           | X, and either the type-rules for arithmetic, or Z, you can
           | figure out Y.
           | 
           | The constraints of their usage can be used to determine the
           | type required of that data.
        
           | 10000truths wrote:
           | I think of it the other way around - dimensional analysis of
           | units is a special case of a type system. Two units of
           | different measures cannot be added/subtracted with one
           | another. Units of the same measure (e.g. length: cm, in) are
           | trivially convertible via a scalar factor (e.g. 2.54 cm/in),
           | and units of different measures (e.g. length: m, time: s) can
           | be multiplied/divided to get a new unit of measure (speed:
           | m/s), although some such units might have no practical use.
        
             | cb321 wrote:
             | Dimensional analysis of units is absolutely a special case
             | (I said "primordial example") of type systems.
             | 
             | Proof: "constrained operations" and "notional objects" are
             | very limited/somewhat concrete -
             | "additive/comparative/conversion-y" and "measured things".
             | Modern notions abstract both ideas.
             | 
             | This is, however, not unimportant (not to suggest I think
             | @10000truths said so, exactly). In educational settings one
             | usually wants to begin with the most familiar examples and
             | _then_ generalize. { Why, this probably even applies to
             | greats like Russell. :-) } Further, in attributional
             | settings, giving general names /notions to things is often
             | less seminal than the things one is naming. (Both are often
             | worthy of attention, but I tend to push back when "namers
             | get all the credit".)
             | 
             | There is probably no single, famous big name like Russell
             | to attach to the idea of units of measure/standard
             | conversion/comparison. There may be a King of Olden Times
             | who beheaded those who didn't follow his particular
             | "system" when taxing crop growing areas or converting area
             | to expected weights or volumes of grain/etc. ;-)
             | 
             | In some ways the very idea of "money" itself incorporates
             | standardized exchange/comparison within it. Money itself is
             | taken to be pre-historic [1], though there may have been a
             | few original "coins"/tokens of money. Little tick marks in
             | clay ledgers may have been the first and we can be fairly
             | sure even before all the spice trade/wars that not all
             | crops/goods were deemed equal/additive unconverted. On the
             | contrary, money was useful specifically to help convert.
             | 
             | [1] https://en.wikipedia.org/wiki/History_of_money
        
             | HPsquared wrote:
             | Perhaps the fundamental dimension (mass, length, time etc.)
             | could be considered as the 'abstract base class', with the
             | specific units (meters, feet...) being derived classes.
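             | 
             | A minimal sketch of that modeling idea in Python (the class
             | names are my own illustration):
             | 
             |     from abc import ABC, abstractmethod
             | 
             |     class Length(ABC):
             |         """Abstract dimension; concrete units derive from it."""
             |         @abstractmethod
             |         def to_meters(self) -> float: ...
             | 
             |     class Meters(Length):
             |         def __init__(self, v): self.v = v
             |         def to_meters(self): return self.v
             | 
             |     class Feet(Length):
             |         def __init__(self, v): self.v = v
             |         def to_meters(self): return self.v * 0.3048
             | 
             |     def total_meters(*lengths: Length) -> float:
             |         # Accepts any unit of the Length dimension; a time or
             |         # mass object would be a type error here.
             |         return sum(l.to_meters() for l in lengths)
             | 
             |     print(total_meters(Meters(1.0), Feet(3.0)))  # 1.9144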
        
               | cb321 wrote:
               | Perhaps. The
               | https://en.wikipedia.org/wiki/Buckingham_%CF%80_theorem
               | and Lord Rayleigh's refined ideas on units of measure
               | predated Russell's work. These ideas also include at
               | least 1 abstract-to-specialized transform like @HPSquared
               | suggests. Probably it was >1 transform in the sense that
               | (Mass, Length, Time) are conventional not fundamental.
               | 
               | Following up on that ">1" requires a bit of explanation.
               | Particle physics gets by with "natural" units where
               | basically everything is rational powers of one base unit
               | (MeV|energy|whatnot). SI units want like 7 or 8 or 9 base
               | dimensions to avoid fractional powers (kind of a losing
               | game since as soon as you have an equation with varying
               | integer powers on either side fractional powers
               | resurface).
               | 
               | Rayleigh may have used different English words than
               | "abstract base class" or "type" (maybe "kind", "sort",
               | etc.), but physicists definitely had "type ideas" more
               | abstract and formally symbolic than the units of measure
               | your grandma knew for recipes - before Russell said
               | anything about them.
               | 
               | I am not sure Planck was the first to realize the "pick
               | how many base dimensions you want" tricks - seems like
               | Rayleigh or even earlier may have been, but in 1899
               | Planck published (before Russell's 1903 Doctrine of
               | Types) a very abstract system of units such that major
               | constants are 1.0 and dimensionless [1]. That level of
               | conceptual abstraction (dimensionality - at least 1 more
               | than just concrete vs abstract) also clearly predated
               | Russell - but maybe only by a decade or two.
               | 
               | It seems to me uncontentious that physics/physicists were
               | in a very abstract type theory headspace with regard to
               | units of measure before Russell published that appendix.
               | The only point of debate would be "Does it count or did
               | Russell abstract it _so_ much further as to count as a
                | fundamentally different thing?" While I am no Russell
               | biographer, it would be unsurprising to me if he was
               | inspired either implicitly or even very explicitly by
               | such "just earlier than him" happenings in physics like
               | Planck units. These units famously "make physics look
               | more like pure math" which does not seem like something
               | Russell would have failed to notice.
               | 
               | As an "error catching mechanism" like types in
               | programming languages, units function exactly the same
               | way as programming types - just with the person as a
               | compiler "doing their own checksum" to make sure they
               | don't add apples & oranges. Calculation by hand was the
               | predominant kind of computation at the time. So this is
               | all coherent. At least "along the history of ideas arc"
               | leading to programming language types, there may be great
               | abstracters, but I doubt credit can be given to one
                | person like TFA tries to, except maybe as an
               | ambassador/evangelist/namer/maaaaybe first abstractor.
               | (Not trying to suggest realizing the importance of
               | something doesn't matter, but more make people realize
               | how much this particular line of thought is almost as
               | prehistoric as numbers themselves/quantification itself.)
               | 
               | Anyway, I find units for mass, speed, time, etc. useful
               | in explaining types (and checksums) to non-programmers.
               | Maybe you will, too.
               | 
               | [1] https://en.wikipedia.org/wiki/Planck_units
        
       | [deleted]
        
       | chrisbrandow wrote:
       | I'm still reeling from the three things I learned in this post:
       | 
       | 1. Bertrand Russell studied the "I am lying" paradox, which I
       | always thought was nothing more than a clever meme.
       | 
       | 2. Bertrand Russell is responsible for type theory.
       | 
       | 3. That type theory came out of his study of the paradox.
        
         | __MatrixMan__ wrote:
         | The distinction between countable and uncountable infinities
         | also stems from the liar's paradox.
        
           | Koshkin wrote:
           | How so? The set of reals is uncountably infinite. Looks like
           | a self-contained fact (that can be proved by a "diagonal
           | argument").
        
         | ProfHewitt wrote:
         | Bertrand Russell introduced orders on propositions to block
         | construction of paradoxes such as "The Liar" [Russell 1908].
         | 
         | Without orders on propositions, the "self-referential"
         | proposition I'mFalse (such that I'mFalse = !I'mFalse) could be
         | constructed using the following recursive definition:
         | 
         |     I'mFalse:Proposition<i> ≡ !I'mFalse
         | 
         | Since I'mFalse:Proposition<i> is a propositional variable in
         | the definition, !I'mFalse:Proposition<i+1>. Consequently,
         | 
         |     I'mFalse:Proposition<i> = I'mFalse:Proposition<i+1>
         | 
         | which is a contradiction.
         | 
         | For more information see the following:
         | 
         | https://papers.ssrn.com/abstract=3603021
         | 
         | https://www.youtube.com/watch?v=AJP1VL7shiI
        
       | yuchi wrote:
       | Awesome read! But throwing the Godel incompleteness theorem in
       | your face at the end of the article without any specific
       | explanation is rude!
       | 
       | Now I want to understand in a similar fashion _why_ the
       | incompleteness theorem makes type systems incapable of preventing
       | all bugs!
        
         | shikoba wrote:
         | >Now I want to understand in a similar fashion why the
         | incompleteness theorem makes type systems incapable of
         | preventing all bugs!
         | 
         | You'll never understand because it's false. Godel's theorem
         | proves that the set of true statements isn't enumerable. But
         | the theorems (the provable statements) are still computably
         | enumerable.
        
           | reuben364 wrote:
           | A more degenerate argument: a type system that rejects all
           | programs will prevent all bugs.
        
             | Koshkin wrote:
             | I have seen programmers 1) design a type hierarchy; 2)
             | struggle getting things compiled; 3) give up.
        
         | ProfHewitt wrote:
         | [Godel 1931] proposed the proposition I'mUnprovable (such that
         | I'mUnprovable = ⊬I'mUnprovable) for proving inferential
         | undecidability of Russell's foundational theory, which had the
         | type-restriction on orders of propositions to prevent
         | inconsistencies. I'mUnprovable cannot be constructed in
         | foundational theories because strong types prevent
         | construction of I'mUnprovable using the following recursive
         | definition (see Diagonal Lemma [Godel 1931]):
         | 
         |     I'mUnprovable:Proposition<i> ≡ ⊬I'mUnprovable
         | 
         | Note that (⊬I'mUnprovable):Proposition<i+1> on the right-hand
         | side of the definition because I'mUnprovable:Proposition<i> is
         | a propositional variable in the definition ⊬I'mUnprovable.
         | Consequently,
         | 
         |     I'mUnprovable:Proposition<i> = I'mUnprovable:Proposition<i+1>
         | 
         | which is a contradiction.
        
         | tel wrote:
         | It's kind of a long range gesture. Normally, a judgement e: B
         | can only capture some truths about the program e, but if B
         | could encode arbitrary logical statements about e, what does
         | the system look like?
         | 
         | It turns out that if you have that system in code written as s
         | and then try to assert certain properties of it G, then it's
         | possible to design G such that s: G is so twisty and self-
         | referential that it can't be verified. Something to that
         | effect: strong logics have a tough time speaking completely
         | about themselves.
         | 
         | So that's at least one property for one program that's
         | unprovable no matter how powerful our type system is.
         | 
         | But that's so long range. Practically we can make type systems
         | that let us prove massive amounts of things using judgements
         | like e: B. The real issue is that it's just very hard and
         | expensive to make those systems practically eliminate even a
         | small subset of bugs.
        
         | reuben364 wrote:
         | Not qualified to answer, but my thought is that if you have a
         | statement that is true but cannot be proven, you could write a
         | program whose correctness implies the statement; then proving
         | the program correct amounts to proving something that cannot
         | be proven.
        
         | ashton314 wrote:
         | Here's a small attempted answer:
         | 
         | Some programs in the Simply Typed Lambda Calculus [^1] have no
         | type--i.e. diverging programs. Even some programs that may
         | converge have no type. The Y Combinator, for instance, has no
         | type because it would require an infinite type.
         | 
         | From the Wikipedia article under "General Observations":
         | 
         |     Given the standard semantics, the simply typed lambda
         |     calculus is strongly normalizing: that is, well-typed
         |     terms always reduce to a value, i.e., a λ-abstraction.
         |     This is because recursion is not allowed by the typing
         |     rules: it is impossible to find types for fixed-point
         |     combinators and the looping term Ω = (λx.x x)(λx.x x).
         |     Recursion can be added to the language by either having a
         |     special operator fix_α of type (α → α) → α or adding
         |     general recursive types, though both eliminate strong
         |     normalization.
         | 
         |     Since it is strongly normalizing, it is decidable whether
         |     or not a simply typed lambda calculus program halts: in
         |     fact, it always halts. We can therefore conclude that the
         |     language is not Turing complete.
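         | 
         | A quick way to see those untypable terms, sketched in untyped
         | Python (the names are my own):
         | 
         |     # \x. x x -- self-application needs an infinite type, so the
         |     # simply typed lambda calculus rejects it.
         |     omega = lambda x: x(x)
         |     # omega(omega)  # would recurse forever (Python raises RecursionError)
         | 
         |     # Z, a call-by-value fixed-point combinator: recursion with
         |     # no explicit self-reference in the definition.
         |     Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(
         |         lambda x: f(lambda v: x(x)(v)))
         | 
         |     fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
         |     print(fact(5))  # 120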
         | 
         | Another way to look at it is via the Curry-Howard
         | correspondence [^2]: for any mathematical proof, you can write
         | down a program that is equivalent to that proof. Verifying the
         | proof's result is the same as running the program. This is a
         | very exciting correspondence that runs _deep_ throughout
         | computer science and mathematics. (I highly recommend the
         | Software Foundations course I linked to below.)
         | 
         | Writing a non-terminating program is like writing one of these
         | self-contradictory logic statements: it has no proof of truth
         | or falsehood. Thus, the fact that we can write programs to
         | which we can assign some kind of type but that never terminate
         | is a way of demonstrating the fact that there are theorems that
         | are well-formed but have no truth assignment to them. (Godel's
         | Incompleteness Theorem)
         | 
         | [^1]:
         | https://en.wikipedia.org/wiki/Simply_typed_lambda_calculus; see
         | also https://softwarefoundations.cis.upenn.edu/current/plf-
         | curren...
         | 
         | [^2]:
         | https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...
        
           | barrenko wrote:
           | Can I ask in a very "noob" way - are types and the theory
           | underneath them something that runs in _parallel_ to the
           | Turing machine, and together they enable modern computing,
           | OR is Turing completeness and Turing machine stuff _also_
           | part of type / lambda calculus theory?
           | 
           | Thanks.
        
             | reuben364 wrote:
             | Semantics of programs in type theory are most of the time
             | described by a small step relation which describes,
             | possibly non-deterministically, how to perform one step of
             | computation. Separately one can consider which terms are
             | values, indicating a successful, terminating computation.
             | Type theory would then involve proving properties about the
             | semantics of typed programs.
             | 
             | One can give a small step relation for a Turing machine and
             | one could possibly come up for a type system for it, but
             | generally one starts with the semantics for untyped lambda
             | calculus.
             | 
             | Some of the more advanced type theory stuff unifies the
             | notions of terms and types, meaning arbitrary computations
             | can happen in type checking and so properties about types
             | can involve properties about computation in general.
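             | 
             | A minimal sketch of a small-step relation in Python, for a
             | tiny arithmetic language rather than the lambda calculus
             | (the encoding is my own illustration):
             | 
             |     def is_value(t):
             |         return isinstance(t, int)
             | 
             |     def step(t):
             |         """Perform exactly one step of computation."""
             |         op, a, b = t
             |         assert op == "add"
             |         if not is_value(a):
             |             return (op, step(a), b)   # reduce left operand first
             |         if not is_value(b):
             |             return (op, a, step(b))   # then the right operand
             |         return a + b                  # both values: reduce
             | 
             |     t = ("add", ("add", 1, 2), 4)
             |     while not is_value(t):
             |         t = step(t)
             |     print(t)  # 7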
        
           | the_french wrote:
           | > Some programs in the Simply Typed Lambda Calculus [^1] have
           | no type--i.e. diverging programs.
           | 
           | Nitpick, because those programs have no type they are not
           | members of the Simply Typed Lambda Calculus but only of the
           | underlying _untyped_ calculus
        
             | ashton314 wrote:
             | You're right. My bad. Thanks!
        
           | reuben364 wrote:
           | I'm struggling to fit this into my current understanding. I
           | guess I don't really know how incompleteness looks in a type
           | theory setting. Terms are proofs, types are propositions. _
           | are theories, _ are models, _ are valid formulas?
        
         | didibus wrote:
         | > Now I want to understand in a similar fashion why the
         | incompleteness theorem makes type systems incapable of
         | preventing all bugs
         | 
         | I think the article has this a little wrong.
         | 
         | The theorem in programming terms I think simply means that
         | there are some programs for which another program can't prove
         | certain properties.
         | 
         | But in theory you can restrict yourself to writing programs
         | that can be proven completely.
         | 
         | Even then though, you won't eliminate all bugs, and I think
         | that's the part the article gets wrong.
         | 
         | Type systems don't prevent bugs, they help you prove properties,
         | but the programmer still needs to come up with all properties
         | and edge cases themselves and set things up for the type system
         | to prove those, and there's a lot of opportunity for the
         | programmer to forget about a lot of properties or to have a bug
         | in how they're trying to prove it.
         | 
         | Finally, there is only so much you can prove in a general way,
         | in that there are only so many properties you can try to prove.
         | 
         | Just think about it, you can't go listing all possible inputs
         | and asserting all possible outputs because as a programmer that
         | would take way too long. So you try to come up with shortcuts,
         | which will be properties, maybe you say alright so for even
         | inputs I'd expect even outputs. Now maybe you use a type system
         | to prove that. But you can still see how you've not proven that
         | it returns the right output for every single input, just the
         | property that even should return even. And that still leaves
         | gaps for bugs.
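         | 
         | A small illustration of that gap in Python (the function is my
         | own example): this passes a type checker (int -> int), and it
         | even satisfies the "even in, even out" property, yet it is
         | wrong for every input except 0:
         | 
         |     def double(n: int) -> int:
         |         # Bug: should be n * 2. The type and the evenness
         |         # property both hold, but the behavior is wrong.
         |         return n * 4
         | 
         |     print(double(3))  # 12, expected 6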
         | 
         | But none of those have anything to do with Godel's
         | incompleteness theorem, those exist even if you restrict
         | yourself to programming only in a language for which type
         | systems can fully operate in.
         | 
         | Disclaimer: I'm no expert in this though.
        
           | davidgrenier wrote:
           | > Type system don't prevent bugs, they help you prove
           | properties
           | 
           | From a certain perspective, I disagree.
           | 
           | If I write a function, let's call it sort. It takes an array
           | of integers and returns an array of integers. I call the
           | function with an array and it so happens that the array
           | returned isn't sorted.
           | 
           | Is there a bug?
           | 
           | I would argue there isn't because the function makes no such
           | statement that the array returned should be sorted, or that
           | it should be a permutation of the input array.
           | 
           | How could any programming language/type system "catch all the
           | bugs" if no statements are made as to what constitutes a bug?
           | 
           | If I create a function in Idris and specify in the type
           | signature that the returned array will be a permutation of
           | the original and that it will be in increasing order, then
           | yes Idris will catch all the bugs.
        
             | didibus wrote:
             | If you redefine what others mean to fit your particular
             | point, you will obviously be correct, yet you'll have
             | failed to be correct to the intended meaning used by other
             | people.
             | 
             | Using your definition of bug, take for example a function
             | in an untyped language. Assuming a bug free implementation
             | of the language, I can say that no matter what code I
             | write, it will always be bug free, because the code will
             | always do exactly what the code specifies, and the function
             | made no statement to any particular behavior.
             | 
             | So if you define your "sort" to be a function that does
             | what your Idris type specification defined it to do, yes
             | your sort will be correct to it, assuming a bug free
             | implementation of Idris and its type system, and that the
             | type checker asserted it to be correct.
             | 
             | And yet, for the intended behavior to the users of the
             | program and the objective it was meant to achieve it's very
             | possible that your sort function doesn't work.
             | 
             | Or in other words take this Python code:
             | 
             |     def sort(array):
             |         return array
             | 
             |     print(sort([2,1,3]))  # prints [2, 1, 3]
             | 
             | Is this a bug? Or did I just mislabel the name of this
             | function?
             | 
             | I feel like that's the gist of your argument. You're
             | saying: it behaved correctly to the extent that I have
             | formally specified it, and in the case of Python, since I
             | have not formally specified anything, all behaviors are
             | valid and correct.
             | 
             | But what good is that?
        
               | reuben364 wrote:
                | As good as you can get without being able to read the
                | programmer's brain to figure out what they really wanted
               | in the first place. To talk about correctness formally in
               | the first place, you need to specify what you want and
               | when you do that there is no guarantee that that is
               | actually what you want.
        
             | jdmichal wrote:
             | Though if you allow duplicates, that should be non-
             | decreasing order instead of increasing order?
        
               | didibus wrote:
               | I think the argument of OP would be that this wouldn't be
                | a bug, because Idris has proved increasing order, which
                | means that if the function allowed duplicates that could
                | result in non-increasing order, the Idris type checker
                | would have told you your function is incorrect.
               | 
               | Basically, the proof is the spec, and a bug is defined as
               | "not behaving as defined by the spec".
               | 
               | And assuming Idris is itself bug free in its language
               | implementation and type checker implementation and is
               | powerful enough to prove your given spec, then OP says
               | your function is guaranteed bug free.
               | 
               | The difference is that he defines bug as:
               | 
               | > When the function behaves outside what Idris has
               | proven.
               | 
               | While I think most people define a bug as:
               | 
               | > Doesn't do what it should have to provide the correct
               | intended user behavior.
        
             | MichaelBurge wrote:
             | In Python, a function named "sort" could take an array and
             | return an integer. Or None. Or crash. So knowing that it
             | returns an array of integers is an improvement over the
             | untyped case.
             | 
             | Types like "returns an array" or Java's checked exceptions
             | are attempts at narrowing the possible behaviors from a
             | function.
        
           | barrenko wrote:
           | Is this in a way related to something called "property-based
           | testing"?
        
             | didibus wrote:
             | It is and isn't.
             | 
             | The type system will try to prove that a program will
             | behave in some specific ways without needing to run the
             | program. The "specific ways it behaves" is what I call
             | "properties".
             | 
             | Property based testing will run the program and check as it
             | is running that it behaves in some specific ways.
             | 
             | In the end, they both can be used to demonstrate that some
             | properties of a program execution hold, but they go about
             | it differently.
             | 
             | The type system will use logic over the code semantics and
             | structure to assert that in all possible cases (combination
             | of inputs) the property will hold.
             | 
             | The property based testing will use sampling to assert that
             | in a lot of cases (combination of inputs), the property
             | will hold.
             | 
             | Unless you can sample the entire input range in a property
             | based test, you don't prove that it holds for all inputs,
             | just for a large number of them. While type systems tend to
             | prove for all inputs.
             | 
             | Also generally the type system will take less time to
             | execute its reasoning over the code, than the property
             | based test will take to run its sample set.
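             | 
             | A minimal sketch of the testing side in Python, using the
             | Hypothesis library (my_sort and the chosen properties are
             | my own illustration):
             | 
             |     from collections import Counter
             |     from hypothesis import given, strategies as st
             | 
             |     def my_sort(xs):
             |         return sorted(xs)
             | 
             |     @given(st.lists(st.integers()))
             |     def test_sort_properties(xs):
             |         out = my_sort(xs)
             |         # Property 1: output is in non-decreasing order.
             |         assert all(a <= b for a, b in zip(out, out[1:]))
             |         # Property 2: output is a permutation of the input.
             |         assert Counter(out) == Counter(xs)
             | 
             |     test_sort_properties()  # Hypothesis samples many inputs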
        
             | formerly_proven wrote:
             | Property-based testing is not proof-based, but stochastic;
             | you define the input space for a property fairly well, then
             | the testing framework samples random points within that
             | space. They also generally have reducers, which try to
             | create small failing inputs iteratively after finding one
             | failing input. The difference with fuzzing is mostly that
             | fuzzers are largely black box; the input space is left
             | largely undefined (e.g. AFL: bag of bytes), and the
             | property under test is usually "well, uh, does it
             | explode?". I don't think the popular property-based testing
             | frameworks use instrumentation to guide exploration of the
             | input space.
        
           | rocqua wrote:
           | I suspect that, given any type system / proof system for
           | correctness, there will be well behaved programs that cannot
           | be proven correct in the proof system, nor can they be
           | refactored to equivalent programs that can be proven correct.
           | 
           | I think perhaps the challenge 'write a doubly linked list in
           | Rust' is an example of this, but I am not sure.
        
             | didibus wrote:
             | What you say is possible, and my knowledge stops short of
             | it.
             | 
             | I know this is true:
             | 
             | > given any type system / proof system for correctness,
             | there will be well behaved programs that cannot be proven
             | correct in the proof system
             | 
             | But I'm not sure if this is as well:
             | 
             | > nor can they be refactored to equivalent programs that
             | can be proven correct
             | 
             | If so, then there will be some programs whose behavior
             | simply cannot be proven by a type system. Though I wonder
             | if for those programs there could be another type system
             | that could prove it? So maybe with a combination of type
             | systems it can still be achieved?
             | 
              | Either way, what I meant to say was that even for programs
              | that can be proven, the problem of "bugs in a program" has
              | not been completely solved. Since there can be bugs in your
              | proof,
             | and there are often missing assertions in it as well, in
             | that you'll generally prove a property of it, but not
             | directly assert precisely what you expect of each and every
             | inputs, and you might also simply forget to prove certain
             | key properties.
        
         | jerf wrote:
         | It's simple. A type system can be complicated enough to itself
         | do computations. See dependent types, for instance. Once you open
         | that up you have the full range of complications for computing
         | itself in your type system. There are type systems sufficiently
         | complicated to declare "the type of Turing Machines that halt
         | in a finite amount of time" as a type; this causes obvious
         | problems.
         | 
         | It is a well-known phenomenon that it can take surprisingly
         | little to make a computation system Turing complete; witness,
         | for instance, C++'s accidentally-Turing-complete templates. So
         | it can wiggle in without you even realizing it at design time.
         | Or, if you're using a type system with sufficient power like
         | dependent types, it is surprisingly easy to put together four
         | or five recursive types that, on their own, are all perfectly
         | computable, but together turn out to create something that is
         | Turing complete. Write a program with a few hundred of these
         | and Turing completeness is almost bound to sneak in somewhere.
         | 
         | See https://www.gwern.net/Turing-complete , and ponder
         | applications to just-slightly-too-powerful type theories. Or
         | https://github.com/Microsoft/TypeScript/issues/14833 .
        
       | bjornsing wrote:
       | > Similarly, type theory wasn't enough to describe new
       | foundations for mathematics from which all mathematical truths
       | could in principle be proven using symbolic logic. It wasn't
       | enough because this goal in its full extent is unattainable.
       | 
       | This is an often repeated claim, but I can't shake the feeling
       | that it's an incorrectly drawn conclusion (of Godel's
       | incompleteness theorems).
       | 
       | My understanding: It's true that there are such "mathematical
       | truths without proofs within axiomatic system X", but they are
       | unreachable by mathematicians and symbolic logic / machines
       | alike. If a mathematical statement has no proof within ZFC we
       | (usually) don't call it a "truth", regardless of if it is true or
       | not.
       | 
       | Please correct me if I'm wrong.
        
         | UncleMeat wrote:
         | > If a mathematical statement has no proof within ZFC we
         | (usually) don't call it a "truth", regardless of if it is true
         | or not.
         | 
         | This depends on your formalism for logic. Intuitionist logic
         | merges the meaning of "true" and "has a proof" but classical
         | logic does not have any problem with having a thing be called
         | "true" while also not being provable.
        
         | simias wrote:
         | >If a mathematical statement has no proof within ZFC we
         | (usually) don't call it a "truth", regardless of if it is true
         | or not.
         | 
         | I'm not sure I understand what this changes about the overall
         | situation. Remember that a formal system also can't prove its
         | own consistency, so you can't just ignore bits that you don't
         | like and expect that it doesn't matter, it's possible that one
         | of these "true but unprovable" propositions would actually
         | render your system inconsistent, in which case it ought to be
         | addressed.
         | 
         | I mean in the end Godel's own proof effectively relies on this:
         | he builds a proposition using the rules of the system saying
         | "this proposition has no proof" and at this point you either
         | have to accept that the proposition is incorrect, but then your
         | system is inconsistent (it can be used to derive falsehoods) or
         | that it is correct but then the system is incomplete. So you
         | may decide not to call it a truth but you also can't really
         | call it a falsehood.
         | 
         | I would also wager that mathematicians generally prefer
         | incomplete to inconsistent systems, in which case you have to
         | accept that these unprovable propositions are indeed truths.
         | 
         | It's like the "does the set of all sets not containing
         | themselves contain itself" paradox, you can't just ignore it as
         | irrelevant because it puts into question your entire formal
         | system.
        
           | Filligree wrote:
           | > I'm not sure I understand what this changes to the overall
           | situation. Remember that a formal system also can't prove its
           | own consistency, so you can't just ignore bits that you don't
           | like and expect that it doesn't matter, it's possible that
           | one of these "true but unprovable" propositions would
           | actually render your system inconsistent, in which case it
           | ought to be addressed.
           | 
           | Is that true? It seems to me that if a statement is
           | unprovable, then even if it would in theory render your
           | system inconsistent, there's no way that can actually happen.
           | 
           | Since, if you found any example of a chain of logic which
           | leads to an inconsistency, that would count as a proof of the
           | original statement. Which is unprovable, and as such can't
           | have proofs.
        
             | simias wrote:
             | But what if the statement is "this statement can't be
             | proven"? You haven't proven the statement, you've proven
             | that it can't be proven.
             | 
             | See https://youtu.be/HeQX2HjkcNo?t=926 for instance which
             | is (IMO) a fairly good explanation of the proof and why you
             | end up with something both true and not provable.
             | 
             | Penrose's "Emperor's New Mind" also discusses this topic at
             | length (within the context of whether human brains can be
             | emulated). I wasn't convinced with his argument regarding
             | AI but his overview of the problem of what's computable and
             | what's not is rather good if a bit dense.
        
               | Filligree wrote:
               | > Penrose's "Emperor's New Mind" also discusses this
               | topic at length (within the context of whether human
               | brains can be emulated). I wasn't convinced with his
               | argument regarding AI but his overview of the problem of
               | what's computable and what's not is rather good if a bit
               | dense.
               | 
                | As far as I'm aware _physics_ is computable, so I don't
               | buy that one. :)
               | 
               | At most one might argue that there are high-level
               | optimizations to be had, which can't be used because
               | they're uncomputable. In which case is it really an
               | optimization?
        
               | simias wrote:
               | That's part of his arguments, that some corners of
               | quantum physics that still elude us (quantum gravity and
               | all that) may be undecidable. He then goes on to explain
               | how he thinks these phenomena could have an influence on
               | cognition.
               | 
               | I don't really buy this theory, but the book is basically
               | 80% "here's what we know" and 20% "here's what I
               | conjecture" so it's still a worthwhile read IMO. It's
               | worth noting that it's a book from the late 80's however,
               | so it's a bit outdated in some parts (he didn't know that
               | the universe's expansion was accelerating in particular,
               | which renders his discussion of the curvature of the
               | universe a bit obsolete).
        
             | ProfHewitt wrote:
             | It turns out that foundational mathematical systems can
             | prove their own consistency.
             | 
             | See the following:
             | 
             | https://papers.ssrn.com/abstract=3603021
             | 
              | However, the proof is _not convincing_, because the
              | proof is valid even if the system is inconsistent!
        
         | garmaine wrote:
          | I believe it is correct to say there are undecidable
          | statements in some limited type systems which cannot be
          | proven there, but which can be once you extend the type
          | system (e.g. by making it higher-order or dependent).
          | 
          | So in those limited systems we know there are things that
          | are "true" or "false", but which can only be proven when we
          | have a superset of the original tools available. (A classic
          | example: Peano arithmetic cannot prove its own consistency,
          | but ZFC can.)
         | 
         | And by extension, maybe in our more complete type system there
         | are things which can't be proven yet, but could perhaps be if
         | there were better tools available?
         | 
          | Maybe another angle to look at it: one of the things which
          | is difficult to prove is termination (the halting problem).
          | Maybe you can't prove that a given program will terminate.
          | But you could run that program and watch it terminate, and
          | then you'd know it does. In that sense there is a "true"
          | answer, but one which couldn't have been proved in advance
          | from type theory.
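          | 
          | A concrete sketch of that "run it and watch" idea (my
          | example, not from the thread, with a hypothetical helper
          | collatz_steps): nobody has proved that the Collatz
          | iteration terminates for every n, yet for any particular n
          | we can simply run it and observe termination.
          | 
          |     # Hypothetical illustration: termination of this loop
          |     # for all n is the open Collatz conjecture, so no proof
          |     # is known -- but for a concrete n we can just run it.
          |     def collatz_steps(n: int) -> int:
          |         steps = 0
          |         while n != 1:
          |             n = n // 2 if n % 2 == 0 else 3 * n + 1
          |             steps += 1
          |         return steps
          | 
          |     print(collatz_steps(27))  # terminates after 111 steps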
         | 
          | But I admit I only have a high-level understanding of these
          | things. Type theory and axiomatic mathematics are topics
          | I'd like to know more about.
        
         | auggierose wrote:
          | ZFC comes with first-order logic, whose proof system is
          | certainly complete. You are right in that sense.
         | 
         | Let me be more specific: Given a sentence that is true in every
         | model of ZFC, you can prove that sentence. This is true because
         | ZFC is a first-order theory, and this is a property of ANY
         | first-order theory.
         | 
          | People who point out incompleteness are talking about
          | something else: the fact that there are sentences S where
          | neither S nor not-S is provable, while there is
          | additionally a meta-level reason why S is believed to be
          | true. For example, take S = the encoding of "ZFC is
          | consistent". BUT: this sentence S is true in some models of
          | ZFC and false in others, because otherwise the completeness
          | theorem would make it provable / disprovable. So this boils
          | down to the meta-truth being different from truth in a
          | particular model.
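          | 
          | In symbols (my own summary in standard notation, not
          | anything from this thread), the two results side by side:
          | 
          |     % Completeness (Goedel 1930): truth in every model of T
          |     % implies provability from T
          |     T \models S \implies T \vdash S
          |     % Incompleteness (Goedel 1931): for a suitable
          |     % consistent T there is a sentence G that T neither
          |     % proves nor refutes
          |     T \nvdash G \qquad T \nvdash \neg G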
         | 
         | Disclaimer: I just recently looked seriously into this stuff,
         | because I was wondering about completeness of a logic I
         | invented. So if the above is wrong, please somebody correct me!
        
         | rocqua wrote:
          | I believe most constructive mathematics (mathematics not
          | using the law of the excluded middle) is actually weaker
          | than Peano arithmetic. Certainly less has been proven in
          | it. Most type theory is constructive afaik, hence it might
          | be even weaker than Gödel's theorem implies.
        
           | zozbot234 wrote:
            | Constructive mathematics can prove anything that
            | classical mathematics can, and more. The only catch is
            | that propositions which are proven classically must
            | always be stated in the _negative_ fragment of the logic:
            | there's no general duality as there is in classical
            | logic, and a positive and a negative statement mean
            | different things.
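            | 
            | A tiny illustration (my sketch in Lean 4, not the
            | commenter's, with my own theorem name nn_em): excluded
            | middle p ∨ ¬p is not provable constructively, but its
            | double negation is -- this is the sense in which the
            | classical statement survives in the negative fragment.
            | 
            |     -- Constructively provable: the double negation of
            |     -- excluded middle, using no classical axioms.
            |     theorem nn_em (p : Prop) : ¬¬(p ∨ ¬p) :=
            |       fun h => h (Or.inr (fun hp => h (Or.inl hp)))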
        
             | rocqua wrote:
              | How does this work for the Banach-Tarski paradox?
        
             | Koshkin wrote:
             | > _and more_
             | 
              | I am not sure about that -- there are some
              | "pathological" phenomena in classical math that are
              | rejected (for lack of proof) in constructive math.
        
               | [deleted]
        
         | rssoconnor wrote:
          | There are true (in the sense of Tarski's definition of
          | truth) but unprovable Π1-sentences in any consistent
          | (sufficiently powerful) system. Which of these true
          | sentences are unprovable varies from system to system.
          | 
          | Π1-sentences (see the arithmetical hierarchy) are closed
          | formulas of the form ∀x:N. P(x), where P is some decidable
          | predicate, i.e. all quantifiers in P are bounded. Thus for
          | any given fixed value for x, say n, P(n) can be expanded
          | out into conjunctions and disjunctions of atomic equalities
          | and inequalities between closed arithmetic formulas, and
          | those (in)equalities can in turn be decided (in principle)
          | by calculation. Notice that 'x' is a formal variable in the
          | language of first order logic, while we are using 'n' as a
          | meta-variable denoting a term representing a numeral, e.g.
          | 42.
         | 
          | Σ1-sentences are closed formulas of the form ∃x:N. P(x),
          | where P is again a decidable predicate.
         | 
          | Some basic first order logic yields that the negation of a
          | Σ1-sentence is logically equivalent to a Π1-sentence, and
          | similarly the negation of a Π1-sentence is logically
          | equivalent to a Σ1-sentence.
         | 
          | Important Fact: All true Σ1-sentences are provable (in any
          | sufficiently powerful system). Why? Suppose ∃x. P(x) is
          | true. Then there is some numeral 'n' where P(n) is true.
          | Since P(n) is decidable, we can prove P(n) when P(n) is
          | true. And, by the rule of existential introduction,
          | ∃x. P(x) is provable from P(n).
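          | 
          | (A toy instance of that argument, in Lean 4 -- my
          | illustration, not the commenter's: the true Σ1-sentence
          | ∃x. x + x = 4 is proved by exhibiting the witness 2,
          | deciding the closed instance by computation, and applying
          | existential introduction.)
          | 
          |     -- witness 2; `rfl` decides the closed equation
          |     -- 2 + 2 = 4 by computation
          |     example : ∃ x : Nat, x + x = 4 := ⟨2, rfl⟩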
         | 
          | Now, let us consider some consistent system T (which is
          | "sufficiently powerful"). For example, we can take T to be
          | ZFC if you think ZFC is consistent. We can construct a
          | Goedel-Rosser sentence for T; let's call this sentence G. G
          | is a Π1-sentence by construction, so we can put G in the
          | form ∀x:N. P(x) for some decidable predicate P.
         | 
          | Now either G is true (i.e. ∀x:N. P(x) is true) or !G is
          | true (i.e. ∃x:N. !P(x) is true). Suppose !G is true. Since
          | !G is (equivalent to) a Σ1-sentence, T proves !G. But by
          | the Goedel-Rosser incompleteness theorem, this would imply
          | that T is inconsistent, contradicting our assumption.
         | 
         | Thus it must be the case that G is true. However G is
         | unprovable in T because, again by the Goedel-Rosser
         | incompleteness theorem, if G were provable in T, then T would
         | be inconsistent, contradicting our assumption.
         | 
         | Thus G is an example of a true sentence that isn't provable in
         | T.
        
       ___________________________________________________________________
       (page generated 2021-10-01 23:02 UTC)