[HN Gopher] Church vs. Curry Types
___________________________________________________________________
Church vs. Curry Types
Author : todsacerdoti
Score : 69 points
Date : 2023-12-18 14:25 UTC (8 hours ago)
(HTM) web link (ericnormand.me)
(TXT) w3m dump (ericnormand.me)
| karmakaze wrote:
| I can't tell if there's anything here I should be trying to
| learn. Best I can tell it's choosing obtuse naming for Static and
| Dynamic typing.
|
| > Summary: Static vs dynamic typing debates often flounder
| because the debaters see from two different perspectives without
| knowing it. Learning to identify the two perspectives can calm
| the discussion. The tension between the two perspectives has led
| to Gradual Typing and other technologies.
|
| I'm not buying this. The best pragmatic answer for myself is that
| static typing is better in large codebases or across teams.
| Dynamic typing is good for solo or early development by a small
| team. Gradual typing is the best you can get when a product
| developed with dynamic typing grows to be developed by many.
| awkward wrote:
| It's comparing static typing as it exists in Haskell or OCaml to
| static typing as it exists in C++ or TypeScript. To dynamically
| type a program in the first group, you would have to manually
| make a giant type that includes everything and use that
| everywhere. In the second, you can program dynamically by
| turning the safety off.
| hardlianotion wrote:
| Are you sure you meant C++ in the second group? In TypeScript
| you can program dynamically, perhaps. In C++, you'd have to
| cast everything brutally, which I don't think most would call
| programming dynamically, or do what you propose for OCaml.
| beders wrote:
| The ability to reason about code, to split it into modules that
| can be developed independently, etc. does not depend on having
| a static type system. It depends on having a specification.
| "Types" are just one form of spec, and in many languages a
| quite limited one.
| nyssos wrote:
| > "Types" is just one form of, in many languages quite
| limited, specs.
|
| That's the "Curry types" perspective described in the
| article, yes. "Church types" are syntax, not specification.
| In Haskell, for example,
|
|     main = print (5 + "foo")
|
| isn't a program that fails to conform to its spec, it's just
| not a program. It has no meaning and no behavior, not wrong
| behavior. You can (but should never) `unsafeCoerce` `5` to
| `String` or `"foo"` to `Int` and get something broken out -
| but they'll be _different_ broken things, and neither will in
| any sense be the compiled form of the `main` function above.
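|
| Roughly, a toy sketch of the extrinsic view (my own, not the
| article's): model untyped terms first, and treat "typing" as a
| predicate checked over them afterwards.
|
|     -- Terms exist on their own, whether or not they type-check.
|     data Term = Lit Int | Str String | Add Term Term
|
|     -- "Is this term an Int-valued program?"
|     isIntTyped :: Term -> Bool
|     isIntTyped (Lit _)   = True
|     isIntTyped (Str _)   = False
|     isIntTyped (Add a b) = isIntTyped a && isIntTyped b
|
|     -- Add (Lit 5) (Str "foo") is still a Term; it merely fails
|     -- the predicate. Under the intrinsic view it wouldn't exist.
|     main :: IO ()
|     main = print (isIntTyped (Add (Lit 5) (Str "foo")))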
| ltr_ wrote:
| I agree with your sentiment, and it resonates with some of my
| own opinions on languages (I've been reading about type theory
| and abstract math in general for a while as a hobby).
|
| My opinions so far (subject to change as I read more):
|
| Dynamic typing = just one type [0]. Yes, it's good for testing
| ideas quickly, for quick gigs I don't care about, and for tiny
| snippets/scripts; I use JS for that.
|
| Gradual typing = a guaranteed mess when evolving large code
| bases. I suspect the gradual part never really happens in
| practice, and when it does your IDE turns red and your
| compiler errors explode. I don't recall where I read that, but
| it made sense to me. I used Groovy to take advantage of
| optional typing, but I was the only dev on that project.
|
| Sound type system + parametric polymorphism = the best thing
| around, although I prefer to have explicit type annotations.
|
| No formal mathematical object behind a language = the language
| does not exist: just absurd rules, idioms, and syntactic mess
| that you have to remember, tied to specific monkey-patched
| implementations that compensate for incoherent foundations.
| Sadly unavoidable, as most, if not all, popular languages "do
| not exist".
|
| Yes, I've been reading Harper a lot.
|
| [0] https://existentialtype.wordpress.com/2011/03/19/dynamic-
| lan...
|
| edit: formatting
| awkward wrote:
| This is a great distinction! With TypeScript getting wider use,
| I've found myself frustrated by the limitations of bolt-on type
| systems. Knowing how deep the divide is from languages with fully
| inferable types gives me more appreciation for how much work
| went into the current state of inference tools.
| shepherdjerred wrote:
| I found TypeScript to be _incredible_ once you use a runtime
| validation library like Zod or io-ts.
|
| TypeScript itself has a great type system, but the culture in
| TS is to use `any` at the boundaries of your app, e.g. in
| parsing HTTP requests, local storage, URL routes -- any data
| outside of the type system. This greatly reduces how much a
| type system can help you.
|
| Zod, io-ts, and other similar libraries let you enforce
| type constraints at runtime in a very sleek, easy way, and they
| don't require you to write your types twice, either.
|
| I've found that this makes TypeScript feel almost as safe as
| languages with much better type systems, e.g. Haskell.
| malcolmgreaves wrote:
| It's wonderful to see an article about this topic that's grounded
| in rigorous thought. And that references the long, rich history
| of folks who have worked hard to articulate the problems and
| solutions one encounters when programming.
|
| From my experience and training, I'm biased towards the Church-
| style of thought for representing all programs. My reasoning is
| simple: ultimately, we write programs to solve a problem. The
| implementation of the program is a secondary concern to producing
| a correct solution. An incorrect solution is just about as useful
| as a program that immediately exits after starting. [1]
|
| Using the type-oriented perspective on programming shifts the
| paradigm from *how* a computer program works to *what* it is
| accomplishing. When we're sure we have the "what" correct, we can
| then perform transformations on our program to optimize its
| implementation to achieve different second-order properties on
| the "how." Do we need this to run faster? Do we need this to use
| less memory? Do we need this to re-use a pool of threads or
| network connections? Do we need this to use SIMD instructions
| because that's what our hardware supports? The possibilities for
| "how" can be seemingly endless: they're always use-case specific.
|
| However, when we're thinking about solutions, we necessarily must
| be focused on achieving a singular goal: the problem is solved.
| When we use types to represent our program semantics, we gain an
| awesome level of problem-solving ability. We can ensure that our
| program semantics really do solve our problem. We don't need to
| wait for an actual execution of our program to determine whether
| or not we've solved it. In a layman's sense of the term, we gain
| an exponential speed-up in our ability to make correct programs.
| It makes insanely complex problems solvable. And it makes the
| simple ones trivial.
|
| We stop wasting our time focusing on re-solving every detail that
| needs to go into the execution of our program. Instead, we are
| able to build upon foundations that _we know work_. Could we get
| to where we are today if we had to write every new program in
| hardware-specific assembly? Implement every piece of
| functionality going down to the level of "how will my processor
| be able to execute this?" Of course not! We've realized this in
| many cases: when we use compilers, we transform a solution
| defined in one space to an equivalent solution defined in
| another; we implement our web app endpoints using the types and
| semantics of WSGI; we make our complex distributed systems on the
| back of infrastructure with well-documented components that
| specify the types of things that they support and what they
| provide. We're surrounded by examples of successful large systems
| that rely explicitly on defining types.
|
| Ultimately, it's my strongly held belief that for the software
| industry to mature to genuine engineering, it's imperative for
| our collective thought to embrace mathematical rigor. This is not
| to gatekeep others: everyone can learn. We must believe that we
| can understand this well enough to teach absolutely _anyone_ who
| is willing to put in the time and effort to understand.
|
| If we're serious, we need to graduate from the notion that we're
| only writing programs to the notion that we're coding up
| solutions to our problems: we define and reason about the
| problem at a conceptual, abstract level and then move towards
| reusing and combining solutions from Computer Science.
|
| --
|
| [1] And it may be _less useful_ than that: consider a program
| that produces _ever so slightly wrong_ results. Is it feasible
| to verify that its output is correct _each time that it is
| run_? If one needs to do that work, then is it actually
| automated? If it's not automated, why put in the effort to
| program a machine to perform the work?
| m0llusk wrote:
| Documentation of data flow and rigorous testing might be an
| alternative, and superior, way to get correct answers, at least
| in some contexts. Just because a type has been assigned to some
| data does not mean it is necessarily correct. All input needs
| to be checked and can have various mistakes, errors, and
| distortions. Type systems are one method that may help to keep
| things in check, especially with complex business logic
| maintained by large and dynamic teams. But if the business
| logic is well defined and the teams are small then typing can
| be a digression that takes time and effort without bringing
| about superior results.
| galaxyLogic wrote:
| The way I look at it is that "types" are assertions, which
| tell us whether our assumptions about the program are correct
| or not. There is no such thing as a "correct program". There
| is only our correct or incorrect understanding of what the
| program is doing. Type declarations help us gain that
| understanding.
|
| Types describe data, which allows the assertions to be
| structured and composite, and to say something about the
| real data running through the program when it executes.
|
| So "types" in a sense do not exist as such, we are simply
| writing "predicates" in some formal language. Whether those
| predicates are checked at compile-time or run-time is a
| separate question.
|
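| For instance (a small Haskell sketch of my point, nothing
| more): the predicate "this list is non-empty" can live in a
| type, checked before the program runs, or in ordinary code,
| checked while it runs.
|
|     data NonEmpty a = a :| [a]     -- the predicate, as a type
|
|     headStatic :: NonEmpty a -> a  -- checked by the compiler
|     headStatic (x :| _) = x
|
|     headDynamic :: [a] -> a        -- the same predicate,
|     headDynamic (x:_) = x          -- checked at run-time
|     headDynamic []    = error "predicate failed: empty list"
|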
| The reason many people assume that 'types' are a thing is
| that when using a "statically typed" language you are always
| required to write them, and therefore they take on more or
| less an existence of their own.
|
| Using a typed programming language, you are always writing TWO
| programs: one that does what you want it to do, and another
| that observes and verifies the code's adherence to your
| assumptions about it. Thus types are a secondary utility; they
| are a harness that limits what your program can do.
| Types help you to better understand and to gain trust in your
| understanding of your program.
| malcolmgreaves wrote:
| I agree that documentation is important. There are several
| methods by which we can communicate program semantics. I
| firmly believe that it's important to do this. For instance,
| there can be bugs in how the types are specified that don't
| align to what one wants. This is where written technical
| documentation or a specification can be incredibly useful.
|
| I'd like to home in on the specific example you bring up as
| push-back against the value of strongly and statically typed
| programs:
|
| > All input needs to be checked and can have various
| mistakes, errors, and distortions.
|
| This is something that should always be 100% represented in
| the type system. Input validation is something where the
| value of typing is probably at its best (in my opinion, at
| least). Input validation is a function that accepts data of
| some "raw" type and then produces a value of the "clean" type
| -- or an error. This is `raw --> (clean | error)`. If you're
| calling this kind of function, you will always need to
| include logic for handling both possibilities. Not doing so
| is a mistake.
|
| Think of a compiler: it accepts many text files (or strings
| thereof) and produces either a binary program or a list of
| errors.
|
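| In Haskell, say, a minimal sketch of that shape (the Raw and
| Email types here are made up purely for illustration):
|
|     newtype Raw   = Raw String     -- untrusted input
|     newtype Email = Email String   -- the "clean" type
|
|     validate :: Raw -> Either String Email
|     validate (Raw s)
|       | '@' `elem` s = Right (Email s)
|       | otherwise    = Left ("not an email address: " ++ s)
|
|     -- The type forces callers to handle both outcomes:
|     handle :: Raw -> IO ()
|     handle input = case validate input of
|       Right (Email e) -> putStrLn ("sending to " ++ e)
|       Left  err       -> putStrLn ("rejected: " ++ err)
|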
| > But if the business logic is well defined and the teams are
| small then typing can be a digression that takes time and
| effort without bringing about superior results.
|
| Some programming languages have rather useless static type
| systems, where they will rely on exceptions. Perhaps this is
| the experience that you're drawing on? I would like to make
| minimal assumptions here, but I acknowledge that I hear this
| kind of line of thinking a lot when I talk to folks who have
| only used statically typed languages like Java. I'll
| pick on Java for a bit, but there are more languages with
| weak static typing that this applies to.
|
| Java's type system is a real shame. It's very verbose and it
| doesn't do much useful work beyond very simple things. A
| major reason for this is the language designers' decision to
| make extensive use of exceptions. Even checked
| exceptions are a pain to deal with: they're not regular
| values, forcing programmers to use completely different
| syntax to deal with them. These poor ergonomics caused people
| to reach for unchecked exceptions. (Unchecked == every single
| exception that can be raised in Python, if that comparison
| helps.)
|
| Unchecked exceptions may provide runtime safety, but at the
| expense of making code verbose while also making it not
| actually handle the problem. It's a concept that needs to be
| used sparingly. Array index out of bounds? Sure. Your input
| validation failed? Terrible. Much of programming -- in the
| large or small -- needs to deal with the possibility of an
| action failing. It's a mistake to write code that can perform
| a failing operation without checking to see if it does fail.
|
| > digression that takes time and effort without bringing
| about superior results.
|
| I'd also like to focus on this part a bit. There are many
| good programming languages that have type inference. At some
| point, one always has to write the types of something. It may
| be obvious, as in `var x = 10`: we know that `10` is
| an `int`, so it's trivially clear that `x` is an `int` too.
| Good type systems are designed to push this idea as far as
| possible. If you're using a function that requires particular
| operations on its input, a type checker can see these
| constraints and infer the right type. Similarly, if the
| output of a function is used in a particular way, the type
| checker can see this and make inferences.
|
| There are statically typed programming languages where
| writing down types is actually fairly rare. They're usually
| only necessary at a few key points. Or necessary to add
| specific constraints to ensure that a particular thing is
| only used in a more specific way.
|
| For example, you may have a language where an operator like
| `+` is defined for both floating point and integer addition
| as well as for concatenating strings and lists. You may
| write something like `def add(x) = x + x`, which could be
| inferred to work with anything where `+` is defined.
| But you may actually want to ensure that this only
| works for floating point numbers, so you put in a `x: float`
| in the parameter list. You might want to do this so that your
| compiler can realize that it can implement this `add`
| function using very fast floating point hardware
| instructions.
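|
| In Haskell syntax, for instance, a sketch of the same
| trade-off (the "fast hardware instructions" part is the hope,
| not something the types guarantee by themselves):
|
|     -- No annotation: GHC infers the most general type,
|     -- Num a => a -> a, so this works for Int, Double, etc.
|     add x = x + x
|
|     -- One annotation pins it down to floating point only,
|     -- which also tells the compiler which addition to emit:
|     addF :: Double -> Double
|     addF x = x + x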
| tsimionescu wrote:
| Note that types are just one way of embracing mathematical
| rigor. They are very good at representing simple properties of
| code and checking them very quickly, but they are very very
| hard to use to represent more complex properties (say,
| specifying that a function takes an array and a comparison
| function and returns the same array in an order defined by the
| comparison function).
|
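| For instance, the ordinary type of a sort says nothing about
| ordering or permutation, so a deliberately wrong
| implementation type-checks fine (a toy Haskell sketch):
|
|     -- Same type as a real sort, but it forgets the input
|     -- entirely. Nothing in this type rules that out; a
|     -- dependently typed signature could.
|     fakeSort :: (a -> a -> Ordering) -> [a] -> [a]
|     fakeSort _ _ = []
|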
| No language in common use today asserts anything beyond the
| most basic properties of a program using types. Idris and Coq
| do that, but they are extremely obscure even compared to
| Haskell or OCaml.
|
| It's also important to note that types are quite rarely used
| outside formal computer science. For example, even in
| theoretical physics, types are not commonly used in published
| proofs (and no, measurement units are not types).
| malcolmgreaves wrote:
| I'm glad you brought up languages that use dependent typing!
| And sure, static typing is just one tool of many, so it's not
| going to be the end-all-be-all in the quest to answer the
| eternal question: "does my program work the way I think it
| does?" More precisely, my claim is that it is a super powerful
| tool that brings substantial benefits, and that it's a
| mistake not to use it.
|
| No tool is perfect. There will always be room for improvement
| as we use our current crop of tools, observe how they serve
| our needs and don't serve them, then use that knowledge to
| build even better tools.
|
| > No language in common use today asserts anything beyond the
| most basic properties of a program using types. Idris and Coq
| do that, but they are extremely obscure even compared to
| Haskell or OCaml.
|
| With enough time, we'll see the influence of dependent typing
| crop up in more common languages. For instance, there's the
| Rust crate dfdx, which uses dependent typing to encode the
| shapes of tensors. [1]
|
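| (Not dfdx's actual API -- just a sketch of the idea in
| Haskell, where a type-level natural carries the shape so that
| mismatched shapes fail at compile time:)
|
|     {-# LANGUAGE DataKinds, KindSignatures #-}
|     import GHC.TypeLits (Nat)
|
|     -- The declared length n lives in the type as a phantom.
|     newtype Vec (n :: Nat) a = Vec [a]
|
|     -- Two vectors can only be combined if their declared
|     -- lengths agree, which the compiler checks.
|     dot :: Num a => Vec n a -> Vec n a -> a
|     dot (Vec xs) (Vec ys) = sum (zipWith (*) xs ys)
|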
| > It's also important to note that types are quite rarely
| used outside formal computer science.
|
| This isn't quite right. At least, I think what you're saying
| is that "physics proofs don't set up typing rules therefore
| they don't use types." But then again, proofs aren't
| necessarily programs. Is every physics proof computable? I'm
| not a physicist so I'll defer that to an expert. My
| understanding is that physicists aren't interested in
| computability when they're communicating their ideas about
| the universe. For the subset of proofs that are computable,
| I'll make the argument that there exist types on the
| mathematical structures and operations on them that they're
| defining. These types needn't be written down and formalized
| with typing rules to exist.
|
| [1] https://docs.rs/dfdx/latest/dfdx/
| ChrisArchitect wrote:
| (2018)
| layer8 wrote:
| I don't think this is necessarily a binary distinction. Even in a
| statically-typed language, so ostensibly with Church types, you
| can imagine a refined type system being added as type
| annotations. For example, many statically-typed languages do not
| support range constraints on their built-in numeric types, and
| such constraints could be added as type annotations, analogous to
| type annotations in a dynamically-typed language. A real-life
| example of this is linters with linter annotations in comments.
|
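| For instance, refinement annotations layered over ordinary
| Haskell types might look roughly like this (LiquidHaskell-
| style syntax, written from memory, so treat it as a sketch):
|
|     {-@ type Percent = {v:Int | 0 <= v && v <= 100} @-}
|
|     {-@ setVolume :: Percent -> IO () @-}
|     setVolume :: Int -> IO ()   -- plain GHC sees only this
|     setVolume v = putStrLn ("volume = " ++ show v)
|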
| So instead of a dichotomy it is really a lattice of possible type
| refinements. And the top of the lattice corresponds to the
| single-type view of dynamically-typed languages. The concrete
| language chooses a particular point in the lattice as its
| universally enforced typing level. (Or sometimes just by-default
| enforced, if there are escapes like "unsafe".) But the choice is
| among all points in the lattice, not just two points.
|
| As a side note, it is etymologically amusing that Haskell doesn't
| have Curry types.
| zokier wrote:
| > I don't think this is necessarily a binary distinction. Even
| in a statically-typed language, so ostensibly with Church
| types, you can imagine a refined type system being added as
| type annotations.
|
| Considering the author provides LiquidHaskell as an example,
| I'd imagine they'd agree with this sentiment.
|
| > As a side note, it is etymologically amusing that Haskell
| doesn't have Curry types.
|
| If I understand correctly, LiquidHaskell corrects that :)
| mrkeen wrote:
| I'm so lost.
|
| (I'm firmly in the static-typing camp, coming from an internal
| philosophy of "computers are supposed to do things as fast as
| possible, and not make mistakes".)
|
| > Intrinsic types are great because you are guaranteed to have a
| mathematically-sound safety net at all times.
|
| Sounds great, even if I take it with the usual grain of salt.
|
| > Extrinsic types are useful because you can apply multiple type
| systems to your code
|
| If I don't get the mathematically-sound safety net, why would I
| apply 1 type system to my code, let alone n?
| layer8 wrote:
| This lets different parts of your code use different type
| systems, so to speak, by picking and choosing which annotations
| you specify. Another real-life example is the "unsafe" mode
| some statically-typed languages have, which effectively
| switches to a laxer type system.
| zokier wrote:
| > Another real-life example is the "unsafe" mode some
| statically-typed languages have, which effectively switches
| to a laxer type system.
|
| Are you referring to Rust here? Because I believe unsafe
| there doesn't change type checking or the type system in any
| way.
| zokier wrote:
| > If I don't get the mathematically-sound safety net, why would
| I apply 1 type system to my code, let alone n?
|
| I don't think it is intended to imply that extrinsic type
| systems wouldn't be sound.
|
| As for why you'd want several type systems, one benefit I
| imagine is that different type systems could verify different
| aspects of your program, instead of being forced to cram
| everything into one type system.
|
| Afaik, just because a type system is sound doesn't alone mean
| that it can be used practically to prove every interesting
| property of a program.
| arxanas wrote:
| It's interesting to note that extrinsic typing is not limited to
| retrospectively-designed typecheckers. One of the big
| philosophical differences between Haskell and OCaml is that
| Haskell permits (encourages) type-directed compilation, while
| OCaml's type annotations don't affect the program semantics
| (although they affect whether the program compiles). But both are
| far away from modern-day retrofitted gradual type systems.
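|
| A small illustration of the Haskell side, where the annotation
| changes which code gets compiled, not just whether it
| compiles:
|
|     main :: IO ()
|     main = do
|       print (read "3" :: Int)     -- picks the Int parser: 3
|       print (read "3" :: Double)  -- same text, different
|                                   -- program: prints 3.0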
___________________________________________________________________
(page generated 2023-12-18 23:01 UTC)