[HN Gopher] Ante: A low-level functional language
___________________________________________________________________
Ante: A low-level functional language
Author : cheesestain
Score : 332 points
Date : 2022-06-17 07:39 UTC (15 hours ago)
(HTM) web link (antelang.org)
(TXT) w3m dump (antelang.org)
| ynoxinul wrote:
| Looks interesting. How are closures implemented? Is it possible
| to use Ante without heap? Are function arguments passed by value?
| jfecher wrote:
| Hello, author here! Closures are implemented with the
| environment parameter as an extra parameter on a function type.
| So internally a function `i32 - i32 -> i32` (wonky function
| type syntax currently with - separating arguments) which uses
| an environment of type String is represented as a pair of the
| function and its environment: `(i32 - i32 - String -> i32),
| String`. The same way C++ and Rust represent their closures.
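For intuition, here is a minimal Python sketch (my own illustration, not Ante code) of that closure-conversion scheme: the closure becomes a pair of a code pointer that takes the environment as an extra parameter, plus the environment itself.

```python
# Sketch of closure conversion: the closure `fun x -> x + n`
# becomes a pair (code, env), where `code` takes the captured
# environment as an extra trailing parameter.
def make_adder(n):
    def adder_code(x, env):        # extra `env` parameter
        return x + env["n"]
    return (adder_code, {"n": n})  # the (function, environment) pair

def call_closure(closure, *args):
    code, env = closure
    return code(*args, env)

add5 = make_adder(5)
print(call_closure(add5, 3))  # 8
```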
|
| Function arguments are passed by value currently, though I may
| explore pass by move and other options in the future.
|
| I hope for ante to be usable without a heap, though the `ref`
| type automatically handling lifetimes makes this more
| difficult. These refs compile to an equivalent of destination-
| passing in C, but can require dynamic allocation if a function
| creates a ref and another function calls the first in a loop. I
| also plan on having an Allocate effect for whenever a function
| can allocate, so that it is trivial to handle allocation with
| your own handler that can do anything. This would be an easier
| alternative to Zig's approach of "pass the allocator everywhere
| manually," but there are still things I need to iron out, like
| how it interacts with the lifetime inference issues above, so
| it's still in the design phase.
| Syzygies wrote:
| I avidly follow descendants of Scheme and Haskell (I figured out
| for example how to build Idris native on an M1 Mac), and I plan
| to follow Ante. I just spent $5k of lunch money on an M1 Ultra
| Mac Studio, to use as a math compute server. How do I keep it
| busy? Haskell. My "Hello world" parallel benchmark, using reverse
| search to enumerate the 66,960,965,307 atomic lattices on six
| atoms, took 14 hours in 2009. It now takes four minutes.
|
| I'd love to start again with a new, small language. I love
| Haskell, but it's decades old, and one struggles to work with it
| without being torn apart by the gravitational tides of the black
| hole at the center of its galaxy. All jokes aside, Haskell really
| does reward a PhD in category theory, or the self-taught
| equivalent later.
|
| Functional languages make parallelism easier, and Haskell has put
| particular effort into this: Adding a dozen lines to a program
| can keep the 16 performance cores of my Mac Studio running full
| tilt. I have yet to read any account of how someone else's
| favorite language makes parallelism easy, that is aware of the
| comparison with Haskell. If I thought there was a reasonable
| alternative, I'd jump at trying it.
|
| Rust appeals because it replaces GC with modern control of
| allocation lifetimes. I love how Ante will also explore this. I
| use cyclic data structures; how one handles this functionally,
| and how one handles this using reference counting, is exactly the
| same problem.
|
| Parallelism should be the first consideration when designing
| any new language in 2022. Parallelism is _the_ reason I keep
| using Haskell, despite newer alternatives. Any language that
| runs on a single core is a toy. This is a plea!
| JoshCole wrote:
| Also have a computer with a ridiculous number of cores (32)
| that I want to keep busy.
|
| I've found Clojure to dominate the ease of parallelism. For
| problems that are actually stupidly parallel the number of
| changes to your code is often measured in letters rather than
| lines. For example, you might change map to pmap or you might
| change reduce to reducers/reduce and change nothing else but
| now be fully leveraging all your cores. For problems that
| aren't stupidly parallel I feel like Clojure shines even more.
| Most languages didn't implement software transactional memory
| and encourage it as the default way to work with things that
| vary over time, but Clojure did. On account of that you can
| have a not massively parallel algorithm that would be full of
| tricky lock code in another language and still end up
| leveraging all your cores by doing something as simple as map
| or pmap, but not run into horrible issues.
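The letters-not-lines property shows up elsewhere too. As a rough Python analogue of the map/pmap swap (my own sketch, not Clojure; Python threads share the GIL, so CPU-bound work would want a process pool, but the shape of the change is the point):

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # stand-in for an expensive, independent computation
    return n * n

data = list(range(10))

# sequential version
sequential = list(map(work, data))

# parallel version: the only change is whose `map` we call
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, data))

assert parallel == sequential
```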
| thurn wrote:
| I'm confused how data structures work in this language, and
| there's no documentation about it as far as I can tell. Is this
| going to be a Rust-style vectors+iterators system? Is it going
| to use purely-functional concatenative structures? The first
| example you show invokes `map` and `sum` over an Array, but
| what is this code actually doing? Making a copy of the data in
| the array? Creating a mapping iterator?
| jfecher wrote:
| Rust-style with Vecs+Iterators is the current style, yes. So
| the map+sum example creates an iterator rather than copying the
| array.
|
| I'd like to remove iterators in favor of generators
| (implemented with algebraic effects) in the future though,
| since they're much easier to define. Actually switching over is
| on hold until effects are implemented and speed tests are done
| to ensure they're of comparable speed to iterators (which they
| should hopefully be, due to monomorphisation of effects +
| handlers).
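The laziness being described can be modeled in Python (illustration only; Ante's effect-based generators are still being designed): a generator-backed `map` yields elements on demand, so `sum` consumes them one at a time without materializing a copy of the array.

```python
def map_gen(f, xs):
    # lazy map: nothing is computed until a consumer asks
    for x in xs:
        yield f(x)

data = [1, 2, 3, 4]
total = sum(map_gen(lambda x: x * 10, data))
print(total)  # 100
```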
| alilleybrinker wrote:
| If anyone is interested in other programming languages also
| written in Rust, there's a community-maintained list!
| https://github.com/alilleybrinker/langs-in-rust
| dthul wrote:
| Looks very cool! Can somebody enlighten me what's happening in
| the Algebraic Effects example? Specifically this part:
|             handle f ()
|             | flip () -> (resume true + resume false) / 2.0
|
| Does `handle f ()` call `calculation` and the `| ...` part
| "injects" the `flip` effect? I am also quite confused by the part
| following `| flip ()`. It somehow returns true or false with a
| probability of 50%? And why does this give you the expected value
| of the whole calculation function in the end?
| yakubin wrote:
| Explanation based on my familiarity with Koka:
|
| This expression matches on the effect _flip()_ and handles it
| with the code on the right-hand side of _->_, which becomes the
| value of the function invoking the effect. In this case the
| handler runs the continuation for both possible values (_resume
| true_ and _resume false_; i.e. the function was called once,
| but it will return twice, with _true_ and _false_ substituted
| in the place of _flip()_ where the effect was used) and returns
| their mean.
|
| I.e. each time _calculation_ calls _flip()_, the effect handler
| will continue the function for both _true_ and _false_ and then
| take the mean of those two results. Comments explain that
| _flip()_ should simulate a coin flip, which gives equal
| probability (1/2) to _true_ and _false_. Each _flip()_ in
| _calculation_ was simulated with an equal number of _true_ and
| _false_ values, so it fits the expected distribution. Taking
| the mean of those values each time is just an expansion of the
| integral in the definition of expected value.
|
| Note that there is some recursion here. While the _flip()_
| handler is executing for the first invocation of _flip()_,
| another instance of the handler will execute for the second and
| third invocations (inside _resume true_). I.e. it calculates
| the expected value by taking the mean of the expected values of
| each branch created by a flip. When a branch doesn't have any
| more flips inside, its expected value is taken to be the
| constant value that this branch returns.
| jfecher wrote:
| Author here. A good way to understand algebraic effects is as
| "resumable exceptions." In this case the expected_value handler
| says to run `f ()`, and whenever that computation "throws" a
| `flip ()` effect, to handle it by resuming the computation with
| the value true returned for flip. The continuation then
| proceeds as normal; subsequent uses of flip are also handled
| until the computation finishes. Then we evaluate the rest of `+
| resume false) / 2.0` and resume the computation (a second
| time!) in the same place, this time with the value false.
| Finally, we add both values returned and divide by two to get
| the expected value.
|
| This way of computing expected value works because for each
| flip we essentially have a case-split: the value can be true or
| false with a 50-50 chance, so the expected value of the whole
| thing is 0.5 * the result of the true branch + 0.5 * the result
| of the false branch!
|
| This is more of a fun use of effects than a practical one. If
| you're still curious about effects, check out the full page on
| them here: https://antelang.org/docs/language/#algebraic-effects
| which includes some actually useful effects like State,
| Generators, looping constructs, parsers, etc.
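A runnable Python model of this handler (my own sketch; Python lacks multi-shot continuations, so resuming the same point twice is emulated by replaying the computation with each recorded coin sequence):

```python
def expected_value(computation, prefix=()):
    """Expected value of `computation`, which calls flip() for coin
    tosses. On the first flip not answered by `prefix` we branch into
    both worlds, emulating `(resume true + resume false) / 2.0`."""
    class Unanswered(Exception):
        pass

    answers = iter(prefix)

    def flip():
        try:
            return next(answers)
        except StopIteration:
            raise Unanswered()

    try:
        return float(computation(flip))
    except Unanswered:
        return (expected_value(computation, prefix + (True,))
                + expected_value(computation, prefix + (False,))) / 2.0

def calculation(flip):
    # a small example computation (not the one from the Ante docs)
    return (100 if flip() else 0) + (10 if flip() else 0)

print(expected_value(calculation))  # 55.0
```

Each flip splits the run into two equally weighted branches, so the recursion computes exactly the mean described above.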
| dthul wrote:
| Thanks for the explanation, I will definitely check out the
| docs on algebraic effects!
| jkarni wrote:
| I don't fully understand it yet, but I think it resumes twice,
| adds the results, and divides by two.
|
| the `| flip ()` is pattern matching on that effect term, I
| believe. So essentially: when you see `flip ()` inside anything
| in the argument of `expected_value`, capture the continuation
| there, run that argument once with `true` and once with
| `false`, add the results, divide by two.
| HelloNurse wrote:
| How does string interpolation work? In what context, exactly, are
| the placeholders checked and/or evaluated? How are missing or
| incompatible placeholder values handled? The semantics aren't
| obvious (for instance, how do you deal with string inputs, which
| would contain placeholders that cannot be checked in advance?).
| jfecher wrote:
| String interpolation works by expanding to the concatenation of
| several strings. So a string like "the ${foo}." is expanded to
| "the " ++ foo ++ ".". There are some things I'd like to change
| about the current design. Namely it should probably defer to a
| StringBuilder of sorts, and it should possibly do it lazily so
| that interpolation can be used in log calls without worry of
| whether logging is enabled.
|
| These are all checked at compile time though: since
| interpolation can only be done inside string literals, we can
| check all uses of ${expr} inside literals and ensure e.g. that
| expr is convertible to a string. Since these are only in string
| literals, all placeholders are known in advance, so there are
| none that cannot be checked at compile time. A string like "foo
| \${" ++ "foo}" escapes the interpolation opener ${, so it
| concatenates to the actual string "foo ${foo}" with no
| interpolation. Otherwise it would be very confusing to have
| interpolation happening in unintended circumstances (and it
| would make string operations less efficient by having to check
| for this).
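The expansion can be modeled in a few lines of Python (a hypothetical `expand` helper, not Ante's implementation). A real compiler would emit the concatenation at compile time; here `eval` stands in for compiling the embedded expression:

```python
import re

def expand(literal, scope):
    # Split "the ${foo}." into ["the ", "foo", "."]: odd indices
    # are the ${...} expressions, even indices are plain text.
    parts = re.split(r'\$\{([^}]*)\}', literal)
    return "".join(
        str(eval(expr, {}, scope)) if i % 2 else expr
        for i, expr in enumerate(parts)
    )

print(expand("the ${foo}.", {"foo": "cat"}))  # the cat.
```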
| HelloNurse wrote:
| So it's purely syntactic sugar around concatenating plain
| string literals and various expressions, it has no relation
| to print or other I/O, and in case a string comes from input
| (BTW, I don't see any mention of serious IO besides print for
| "logging") or from some computation it isn't subject to
| interpolation.
|
| I don't think limiting string interpolation to enhanced string
| literals in code (leaving out generic strings) in order to
| allow static checking is a practically acceptable restriction.
| For example, logging frameworks meant for long-running
| applications tend to specify message templates in configuration
| files or databases, possibly hot-reloading them at runtime.
| jfecher wrote:
| More or less, yes. I may later change it to some type like
| `Interpolated env` to represent a lazily interpolated string
| that interpolates the types in env. So "${1u8} and ${2}" would
| be typed as Interpolated (u8, i32).
|
| I'd like to keep this orthogonal to IO or other specific
| use cases so that it is applicable to any use case users
| may have rather than tailored to existing ones. You're also
| correct there is no real logging framework or anything of
| the kind except print really. Ante is still in a quite
| early state and there aren't really any substantial
| libraries to speak of. As-is, interpolation is really just
| some nice sugar to make some common operations more
| ergonomic, like f-strings in python.
|
| I think applications that need their own dynamic
| interpolation requirements should define their own
| interpolation to handle their specific needs. It would be
| unacceptable from a performance and predictability
| standpoint for example to support the example of loading
| files into a string and have them interpolated at runtime
| automatically for all file loads.
| markoutso wrote:
| How is this low level exactly?
| roelschroeven wrote:
| It's not, or only partially. "To try to bridge the gap between
| high and low level languages, ante adapts a high-level approach
| by default, maintaining the ability to drop into low-level code
| when needed." says the website.
| jfecher wrote:
| Author here, to me the lack of a pervasive tracing GC and
| values not being boxed by default are important for low level
| languages. That and maintaining the ability to drop down and
| use low level constructs like raw pointers for optimization
| or primitives for new abstractions are essential.
| maze-le wrote:
| Pointers and manual memory management apparently.
| whoomp12342 wrote:
| low is relative :-D my thought exactly.
|
| IMO the only low level language is assembly. Everything else is
| some form of abstraction. C/C++ and the likes I tend to call
| lower, since in 2022 it is closer to the hardware, and then
| sugar languages like python, c#, js, and the likes I call high
| level.
| wtetzner wrote:
| Even assembly languages are abstractions.
| shirogane86x wrote:
| quoting the github readme:
|
| > In general, ante is low-level (no GC, values aren't boxed by
| default) while also trying to be as readable as possible by
| encouraging high-level approaches that can be optimized with
| low-level details later on.
| whoomp12342 wrote:
| with no GC, is there compile time segfault checking like
| rust?
| jfecher wrote:
| Author here, that is the purpose lifetime inference is
| meant to serve. It automatically extends lifetimes of
| `ref`s so that they are long enough. A key advantage of
| these is avoiding lifetime annotations in code, at the cost
| of some lack of control since the lifetime is increasingly
| handled for you. You can also opt out by using raw pointer
| types though and easily segfault with those.
| qwertox wrote:
| I'm left wondering if the CSV-example considers commas inside a
| string field which is surrounded by quotes.
|
| The other examples are way too confusing for me, except the first
| one.
| jfecher wrote:
| It does not! It was meant to be a rather simple example showing
| what using iterators looks like. Perhaps interesting to note
| that I'll likely be removing iterators in favor of generators,
| which are easier to use. With monomorphisation of effects it is
| my hope that they'll be just as efficient, but iterators will
| remain until tests prove that is the case.
|
| As for the examples, they definitely showcase things that
| aren't present in most languages and can thus be confusing.
| That is on purpose though, since ante for me is a language to
| experiment with novel features I find interesting. The second
| example, on lifetime inference, can be thought of as Rust's
| lifetimes but completely inferred: instead of issuing errors
| like "x does not live long enough" it will automatically extend
| the lifetime of x. So my hope is it is easier to use, at the
| loss of some control (control can be regained via other pointer
| types like `Ptr a`, a raw pointer used to implement other
| things on top of).
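For reference, quoted fields are exactly what a naive split-on-comma misses. Python's csv module (used here purely to illustrate the expected behavior) keeps a comma inside quotes as part of the field:

```python
import csv
import io

line = 'name,"Smith, John",42\n'
row = next(csv.reader(io.StringIO(line)))
print(row)  # ['name', 'Smith, John', '42']

# A naive split gets it wrong:
print(line.strip().split(","))  # ['name', '"Smith', ' John"', '42']
```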
| DeathArrow wrote:
| Very interesting stuff! I didn't even think it was possible to
| implement a low-level functional language; I always thought a
| functional language required a ton of abstractions.
| chriswarbo wrote:
| It's certainly possible, although certain styles are more
| appropriate than others.
|
| For example, ATS is ostensibly an ML-like language; but its
| "low level" examples look more like "C with stricter types"
| (e.g. its Cairo examples)
|
| https://en.wikipedia.org/wiki/ATS_(programming_language)
| pjmlp wrote:
| Not at all, there only need to exist enough low-level
| primitives (aka compiler intrinsics) to build the whole stack.
|
| That is how Lisp Machines and Xerox PARC workstation OSes used
| to be built, or Mirage OS for a more recent example.
|
| And I do consider Lisp functional; when I reached university,
| the options were Lisp dialects, Miranda was fresh, and Standard
| ML and Caml Light were rather new.
|
| The mentality that OCaml / Haskell === FP had yet to appear.
| dgb23 wrote:
| Aside from Lisps being yet again an example of "Simpsons
| already did it.", there are also other examples and projects
| that go roughly into this direction. Like Roc[0]. Tim
| Sweeney[1] seems to be working on integrating functional
| programming into games development, which is highly competitive
| and very performance sensitive. John Carmack also has been
| advocating for using functional programming techniques in game
| development (partially).
|
| Generally I think FP has a bit of a misguided bad rep when it
| comes to performance. What makes code slow is indirection that
| a compiler cannot reason about and work that doesn't need to be
| done. There are FP techniques that can help a compiler or CPU
| to generate/transform efficient code, and ones that can confuse
| them, so to speak. Needless mutation can sometimes lead to bad
| code as well (from a compiler or CPU perspective), because it
| solidifies control. Functional programming is fundamentally a
| constraint, and computers, interpreters and so on like
| constraints, because they can then shuffle things around
| without breaking those constraints.
|
| [0] https://www.roc-lang.org/
|
| [1] https://en.wikipedia.org/wiki/Tim_Sweeney_(game_developer)
| epolanski wrote:
| At the end of the day what prevents functional languages from
| being a good fit for low-level programming is that effective
| low-level programming cannot be referentially transparent.
|
| It might just be my 2 cents, but I think Rust can hit a very
| sweet spot for functionally-leaning low-level effective
| programming.
| actionfromafar wrote:
| I can't see why not - in normal computing, there is I/O. On
| embedded it also just "I/O" but a little more brutal than
| streams. Still pretty much the same though, at some point you
| must reference the outside world.
|
| What am I missing?
| chriswarbo wrote:
| > At the end of the day what prevents functional languages
| from being a good fit for low-level programming is that
| effective low-level programming cannot be referentially
| transparent.
|
| I'm not sure I agree. "Straightforward" FP, e.g. a bunch of
| functions defining local vars and calling other functions,
| can be pretty hard to make low-level (it pretty much assumes
| a GC, and indirected closures; although tech like linear
| types can help).
|
| However, the more abstract, point-free, pattern-heavy FP
| seems like a reasonable fit (e.g. pipes/conduit)
| throwaway17_17 wrote:
| I disagree. The reason almost all 'functional' programming
| cannot be said to be referentially transparent is the level of
| abstraction the designers take as a base. The work Greg
| Morrisett did (which included Robert Harper for some branches)
| on Typed and Dependently-Typed Assembly could be used as the
| basis for a 'C-level' programming language that is
| 'functional'.
|
| At a slightly higher level, if memory is explicit, and that
| would include input and output buffers and the like, then it
| is possible to make functions exist inside a 'memory monad',
| similar to Haskell's state monad. Then any and all operations
| involving the memory used can be made referentially
| transparent.
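That "memory monad" idea can be sketched concretely (a toy Python model; the names are mine, not from any real library). Every operation is a function from memory to a (result, new memory) pair, with memory as an immutable tuple, so memory access stays referentially transparent:

```python
# Each action maps memory -> (result, new_memory); memory is an
# immutable tuple, so nothing is mutated in place.
def load(addr):
    return lambda mem: (mem[addr], mem)

def store(addr, val):
    return lambda mem: (None, mem[:addr] + (val,) + mem[addr + 1:])

def bind(action, f):
    # sequence two actions, threading the memory through
    def run(mem):
        result, mem2 = action(mem)
        return f(result)(mem2)
    return run

# store 42 at address 0, then load it back
prog = bind(store(0, 42), lambda _: load(0))
result, final_mem = prog((0, 0))
print(result, final_mem)  # 42 (42, 0)
```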
|
| Now, the real question is: would anyone want to program in such
| a style? I know I wouldn't mind; it's why I broke down and
| designed a personal programming language for my daily use. But
| it's a matter of taste and tolerance.
| deltaonefour wrote:
| Functional is always at odds with low level programming because
| of heap allocation. You can't control it because of immutability.
| Just a simple map operation does a heap allocation. How does Ante
| avoid this problem and give the user control of the heap?
|
| The mechanism should be made clear in the introduction as
| browsing the documentation doesn't make it clear to me.
| jfecher wrote:
| The plan is to give users control through the Allocate effect,
| which can be handled in any way desired as long as it returns
| some memory. It is similar to Zig's approach of "pass the
| allocator everywhere," but easier to use since it is an effect:
| effects are automatically passed around where necessary,
| propagated via function signatures, and can be inferred.
|
| The specific design is still in question though. For one, an
| effect like Allocate will be almost ubiquitous throughout each
| function, so care is needed not to clog up signatures too much.
| There are a few potential solutions here if you're interested,
| from including Allocate within the row-type of a larger effect
| like `IO = can Allocate, Read, Write, ...` to encouraging
| effect inference on functions versus manual annotation.
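One way to picture the contrast with explicit allocator-passing is a Python sketch using dynamic scoping as a stand-in for effect handlers (all names here are hypothetical, not from Ante or Zig):

```python
import contextvars
from contextlib import contextmanager

# The "handler" in dynamic scope decides how allocate() behaves;
# callers never thread an allocator parameter through manually.
_allocator = contextvars.ContextVar("allocator", default=bytearray)

def allocate(n):
    return _allocator.get()(n)

@contextmanager
def handle_allocate(handler):
    token = _allocator.set(handler)
    try:
        yield
    finally:
        _allocator.reset(token)

def make_buffer():
    return allocate(16)   # no allocator argument in sight

sizes = []
def tracing_alloc(n):
    sizes.append(n)       # a handler can log, use an arena, refuse...
    return bytearray(n)

with handle_allocate(tracing_alloc):
    buf = make_buffer()
print(len(buf), sizes)  # 16 [16]
```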
| skavi wrote:
| In the case of map operations, Ante is using Rust style
| iterators over collections. In Rust, iterators are just as fast
| as loops.
|
| The author said somewhere else in the thread they want to
| switch to generators implemented with effects eventually.
| djoldman wrote:
| dotproduct [1, 2, 3] [4, 5, 6] //=> 22
|
| ????????
| jfecher wrote:
| Whoops, typo. I'll update the website
| a3w wrote:
| 4+10+18, so 32 base 10, or hex: 20, but in base 15 this
| actually works out.
| tomcam wrote:
| This had me giggling, thank you
| sargun wrote:
| Is there a reason not to force a linear typing system (at first)
| -- a la mercury? To simplify the lifetime analysis? And then in
| later versions that can be relaxed to affine typing, and then
| subsequently normal lifetimes?
| jfecher wrote:
| Lifetime inference originates from region inference which is
| actually completely unrelated to linear/affine/uniqueness
| typing. Uniqueness typing can definitely complement lifetime
| inference, but isn't really generalizeable to it.
| Escapado wrote:
| The syntax looks a little funky to me but it's still quite
| interesting. Is there a reason why you would make fn(1) and fn 1
| equivalent? For me personally it makes readability worse and
| looks strange when chaining functions like in their last example
| on their landing page.
|
| On mobile horizontal scrolling through the code snippets will
| trigger switching to the next snippet on my phone.
| louthy wrote:
| > Is there a reason why you would make fn(1) and fn 1
| equivalent?
|
| 1 and (1) are isomorphic. A single term tuple can be converted
| to the single term, and vice versa. Having an implicit
| conversion doesn't seem too crazy.
|
| The biggest issue I suspect would be confusion about the most
| idiomatic way, or a mix of styles in real-world code bases,
| that causes confusion or inconsistencies (increases cognitive
| load for the reader).
| IanCal wrote:
| I'm not sure I like a single item tuple being equivalent to
| just the item. Can you ask for the length of a tuple? The
| length of a tuple with two 100 element lists would be 2, and
| if you looked at the tail the length would be 100.
| layer8 wrote:
| Right. In particular if you identify tuples with lists
| (which seems reasonable), you run into typing problems,
| because singleton lists/tuples suddenly have to be unified
| with the element type.
| dannymi wrote:
| I agree. It's better to have single-item tuples still
| require a comma.
|
| (1) - the number 1
|
| 1 - the number 1
|
| (1,) - a tuple with one item, which is the number 1
|
| 1, - a tuple with one item, which is the number 1
| louthy wrote:
| To be clear, I didn't say they were equivalent. They're
| equivalent up to isomorphism, not equal.
| [deleted]
| derriz wrote:
| I don't think this applies in this case. The brackets here
| are used for resolving precedence only. They are a syntactic
| feature which are not represented in the language semantics.
|
| Where brackets are used in some languages to construct
| tuples, you generally need special syntax to represent the
| case for 1-tuples, like Python where "(1)" is equivalent to
| "1" but "(1,)" is a 1-tuple containing a single 1 value.
|
| Also in most FP semantics, x and the 1-tuple containing x are
| not equivalent so the mathematical isomorphism doesn't hold.
| The tuple itself could be undefined (bottom/unterminating or
| null, if the language has such a concept), or could contain
| an undefined value or could be completely well-defined. These
| three cases are not represented in the semantics of the
| unbundled value.
| curryhoward wrote:
| > Is there a reason why you would make fn(1) and fn 1
| equivalent?
|
| For the same reason you'd make 1 + 2 the same as 1 + (2).
| Parentheses can be used to group arbitrary subexpressions.
| HelloNurse wrote:
| If fn 1 is an allowed function call syntax then fn (1) should
| be synonymous, because 1 is expected to be the same as (1)
| except for a slightly different parse tree; but there might be
| good reasons to make fn 1 not a function call (for example, the
| implicit operator in a sequence of two expressions without
| parentheses could be string concatenation rather than function
| application).
| curryhoward wrote:
| > the implicit operator in a sequence of two expression
| without parentheses could be string concatenation rather
| than function application
|
| You can design a language which uses `e1 e2` to represent
| any binary operation you like, but I'd argue that function
| application is more common than string concatenation, so
| it's more deserving of that syntax. Plus, it plays nicely
| with currying.
| deltaonefour wrote:
| Haskell does this. Basically, languages following the ML style
| have this syntax, including Haskell. You must not have
| experience with this family of languages, as it is very common
| and a huge part of functional programming.
|
| A good number of "functional programmers" only have experience
| with JavaScript these days.
| Escapado wrote:
| Close, I saw this when dabbling with Nim and I remember I
| found it confusing, since I mostly write TypeScript and that
| is not a thing there so my post was mostly my ignorance
| speaking. I guess when one is used to it it will not look
| confusing at all!
| tazjin wrote:
| > Is there a reason why you would make fn(1) and fn 1
| equivalent?
|
| If your standard function call convention is just `f x`, but
| you also support precedence operators, then `f (x)`
| automatically becomes possible.
|
| It reminds me of an old Lisp joke, though:
|     f x   -- too mathematical!
|     (f x) -- too many parentheses!
|     f(x)  -- just right!
| gpderetta wrote:
| > f x -- too mathematical!
|
| :) yet we are very happy to use the same syntax in shell
| scripts or the shell itself.
| lyxsus wrote:
| We're not, but your point is still valid.
| georgyo wrote:
| I've heard this argument before, but this isn't true.
|
| In shell, in f x y z, x y z are all arguments to f. It doesn't
| matter if f takes one argument; all are passed to the command.
|
| With many functional languages this gets very confusing, i.e.
| what does `f f x` do? In shell I know for sure.
|
| In the example f f x, it might be easy to parse. But in f x y
| z, any of x y z might be functions.
| nemaar wrote:
| Most functional languages parse a b c d e f as a(b, c, d,
| e, f), it does not matter what b, c, d, e, f are. Do you
| know any language where this is different?
| dwenzek wrote:
| OCaml and Haskell parse `a b c d e f` as `((((a b) c) d)
| e) f`.
| jerf wrote:
| And while _different_ from Algol-descended languages, I don't
| think that's particularly _confusing_. (Not that you were
| saying so, just continuing the conversation.) You can put
| together a confusing expression with it, but I can put together
| confusing things with the Algol syntax with not too much effort
| too. I've got the source code with my name on the blame to
| prove it.
| georgyo wrote:
| I thought more languages did this but at least nix and
| ocaml do not actually behave like I thought.
|
| In Ruby, however, it is a bit more ugly:
|     def f x
|       x + 1
|     end
|     puts f f 1
|
| > 3
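In a curried, one-argument-at-a-time language the ambiguity disappears, since `f f 1` can only parse as `(f f) 1`. A Python sketch of both readings (explicit currying; my own illustration):

```python
def f(x):
    return x + 1

# Ruby's `f f 1` is really f(f(1)): two calls of an arity-1 function.
assert f(f(1)) == 3

# A curried two-argument function: `add 1 2` means `(add 1) 2`,
# so application associates to the left with no arity guessing.
def add(x):
    return lambda y: x + y

assert add(1)(2) == 3
```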
| the_af wrote:
| I don't understand your objection, what output would you
| like to see instead?
| lgessler wrote:
| GP's point is that while yes, we know since `f` has arity 1
| there's no ambiguity, in general you might not have the arity
| of any given function fresh in your head, and therefore can't
| tell (in Ruby) just from looking at `f f 1` whether it means a
| single invocation of an arity-2 function or two invocations of
| an arity-1 function.
| the_af wrote:
| Ah! Right. It helps to have all functions be of arity 1
| to disambiguate, yes.
| tomp wrote:
| No functional languages do that.
|
| OCaml parses a b c as ((a b) c). In case the compiler can
| determine that a is a function taking 2 arguments, it will
| optimise the code so that it's effectively a(b, c). But in
| general that's not possible, especially in the case where the
| compiler determines that a is a function with a single argument
| (in which case its return value must be another function, which
| is in turn called with c) or when a is a first-class function
| (e.g. passed as an argument).
| zeckalpha wrote:
| Many counter examples to this, xargs for one.
| georgyo wrote:
| xargs is not a counter example. It is not a shell
| builtin, it is a program that takes a bunch of string
| arguments like every other program a shell would call.
|
| xargs -0 -n1 bash -c 'mv $1 ${1//.js/.ts}' --
|
| Everything to the right of xargs is a string argument
| passed to xargs.
| zeckalpha wrote:
| So an identifier representing a function? Isn't that the
| same for most languages?
| layer8 wrote:
| I guess the difference is that nesting calls (commands) is
| much less common in the shell, and (closely related)
| commands don't really have a return value.
| DougBTX wrote:
| On the shell, it is all the other way around, so to use command
| substitution it is like this:
|     f $(g x)
| To pass the output of g x to f.
| gpderetta wrote:
| is it? In practice I find that my shell one liners are
| orders of magnitude more complex than what I would dare
| to write in any other 'proper' language:
| grep "hello ($(cat patterns.txt| tr '\n' '|'|grep ')$^)"
| <(ssh other.host "cat ~/file.txt") |tee >(grep 'a'
| >As.txt) >(grep 'b' > Bs.txt)
|
| [yes, gratuitous use of cat, sue me]
| mellavora wrote:
| As an old Lisp fan, I never got this. (f x) and f(x) have the
| same number of parentheses.
|
| And the nice thing about (f x) is that the parentheses group f
| with x, so you have the whole call inside the (). Consistent
| and simple to understand.
|
| vs e.g. print(f"a string"), where it isn't even clear that the
| "f" is a function call.
| layer8 wrote:
| > the nice thing about (f x) is that the parenthesis group
| f with x
|
| The drawback is that they are put on the same level,
| whereas in most people's minds the function is a
| fundamentally different thing from the argument(s). The
| "f(x)" syntax reflects that asymmetry.
| agumonkey wrote:
| What creates that distinction? In the lisp / fp world you
| quickly stop considering functions as separate entities.
| layer8 wrote:
| The function represents the operation or computation you
| want to perform. The arguments represent inputs or
| parameters for that operation or computation.
|
| Of course, theoretically you could also view the function
| as a parameter of the computation and/or the arguments as
| specifying an operation (in particular if those are also
| functions), but for most concrete function invocation
| that's not generally the mental model. E.g. in "sin(x)"
| one usually has in mind to compute the sine, and x is the
| input for that computation. One doesn't think "I want to
| do x, and `sin` is the input of that operation I want to
| do". One also doesn't think "I want to do computation,
| and `sin` and `x` are inputs for that computation". It's
| why you may have mentally a sine graph ranging over
| different x values, but you don't imagine an x graph
| ranging over different functions you could apply to x.
| agumonkey wrote:
| But even in high school topics start to talk about
| functional equations (calculus, e/ln). I'm not sure the
| <function> vs <value> doesn't come from the mainstream
| imperative paradigms and only that.
| tikhonj wrote:
| The distinction isn't between functions and values in
| general, it's between _the function being called_ and
| _the arguments passed to the function being called_. The
| difference isn't in the things themselves, it's in the
| role that they play in the specific expression we're
| reading.
| agumonkey wrote:
| This argument is rather strange. Maybe for people who
| have never interacted with different -fix notations?
| Human language binds concepts with arguments in all kinds
| of directions... I'd be surprised if this is enough to
| annoy people.
| layer8 wrote:
| Natural language isn't precise and a lot is inferred from
| context. Exact order often does not really matter. In
| formal languages, however, you want to be as unambiguous
| and exact as possible, so it makes sense to use syntax
| and symbols to emphasize when elements differ in kind.
|
| Incidentally, that's also why we use syntax highlighting.
| One could, of course, use syntax highlighting instead of
| symbols to indicate the difference between function and
| arguments (between operation and operands), but that
| would interfere with the use of syntax highlighting for
| token categories (e.g. for literals of different types).
| agumonkey wrote:
| We also alternate between various positions in
| programming languages, so it's not a matter of formal
| precision.
| layer8 wrote:
| Not sure what you mean by "alternate between various
| positions".
| junon wrote:
| That's the joke. The number of terms doesn't change, and
| the last two have the same number of parens. The statements
| relate quantities of those things as though they're a
| problem, when in reality the "just right one" only changes
| the order slightly.
|
| Hence the joke. It's one of those jokes that earns a loud
| sigh from me, rather than a chuckle.
| YetAnotherNick wrote:
| > (f x) and f(x) have the same number of parentheses.
|
| That's the joke
| quickthrower2 wrote:
| I have heard that if you count { and ( as parens, a Java
| program for example has just as many parens as a lisp one.
|
| A lisp paren can do both jobs: expressions and scopes.
| layer8 wrote:
| > A lisp paren can do both jobs: expression and scopes.
|
| Using different symbols for different purposes makes
| sense, it helps humans to parse correctly faster.
| quickthrower2 wrote:
| It does, but there are still many different purposes. In
| JS:
|
| ( can mean: function call, part of an expression to be
| evaluated first, regex capture group
|
| { can mean: scope [of ... loop, if, lambda, function
| etc.], object definition, string interpolation code i.e.
| ${..}, class definition
|
| There are probably other things I am not thinking of.
|
| The one that trips me in JS is that code like x.map(v =>
| {...v, v2}) breaks, because the compiler sees the { as the
| beginning of a scope, not the object definition I
| intended.
|
| The working solution is x.map(v => ({...v, v2}))
|
| But x.map(v => v+1) is allowed.
|
| I don't think the compiler could figure out what you
| meant, because an object definition might also parse as a
| lambda function body. For example { myvar } can be both an
| object definition and a function body that evaluates
| myvar.
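| To see both parses concretely, here is a small sketch in plain
| JavaScript (the v2 value is just an illustrative variable):

```javascript
const xs = [{ a: 1 }, { a: 2 }];
const v2 = 9;

// Wrapping the braces in parentheses forces them to be read as an
// object literal: each element is copied and extended with v2.
const withParens = xs.map(v => ({ ...v, v2 }));
console.log(withParens); // [ { a: 1, v2: 9 }, { a: 2, v2: 9 } ]

// Without the parentheses, `v => { ...v, v2 }` is a SyntaxError:
// the braces start a function *body*, and `...v` is not a valid
// statement. A body that does parse, like `v => { v; }`, simply
// evaluates its statements and returns undefined.
const bodyForm = xs.map(v => { v; });
console.log(bodyForm); // [ undefined, undefined ]
```

| The parenthesized form is the standard workaround, because the
| grammar commits to a block as soon as an arrow body starts with {.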
| gorjusborg wrote:
| Sure, why care about splitting algorithms into functions
| and naming them appropriately when you can just write it
| all in a 3k line function.
| kazinator wrote:
| That's right, which is why the parenthesis is combined
| with a leading symbol like (let ...).
|
| Mainly, you don't look at the parenthesis when reading;
| you look at that _let_ and your eyes rely on indentation
| for structure.
| hyperhopper wrote:
| Okay, but now you have to look at more things.
|
| Also, the ending paren doesn't have anything to help you
| see what it is on its own.
| lispm wrote:
| Just ignore the parentheses. Setting them to the
| background color may enforce that.
| kazinator wrote:
| You're not supposed to look for ending parentheses in
| Lisp; getting the right number of them is more or less
| your editor's job, and they are maximally stacked
| together like this: )))), as much as the given nesting
| and indentation permit. Given some (let ..., the closing
| parenthesis could be the third one in some )))))
| sequence; you rarely care which one. If it's not matched
| in that specific ))))) sequence where you expect, then
| that's a problem.
|
| )))) is like a ground symbol in a schematic:
|
|     (+5V (+10V (-10V
|       (INPUT3 ...))))   ----- local "ground" for all the above
|                               (different circuit)
| layer8 wrote:
| That doesn't seem very user-friendly. ;) Or at least
| that's always my perception when programming Lisp.
|
| As a side note, I believe that different people
| fundamentally have different programming languages that
| objectively suit them best, due to differences in their
| respective psychology and way of thinking and perceiving.
| It's interesting to discuss trade-offs, but in the end
| there is no single truth about which language is better
| overall -- it depends on the task and on the person.
| There's no "one size fits all". What's valuable is to
| understand why a certain syntax or language might work
| better for certain people.
| hota_mazi wrote:
| It's a stupid joke.
|
| The negative reactions that a lot of people have toward
| Lisp's parentheses are not because of the function call
| syntax but because parentheses are used everywhere else in
| Lisp's syntax.
| agumonkey wrote:
| I wish we had stats about sexps (we shall call it the
| kinthey scale). As a kid, what you said was the first thing
| my brain caught on: it compresses the number of things I
| had to remember, since it's ~always (idea arguments...). We
| can now discuss interesting problems.
| throwaway17_17 wrote:
| I had the same feeling when I first started using a lisp
| (which was Scheme to work through SICP and the
| Abelson & Sussman MIT course). I was utterly entranced by
| the simplicity and uniformity. It was absolutely a factor
| in the way syntax works in my personal language (which I
| use for almost everything in my day to day). I really do
| agree that learning a lisp can truly expand the way in
| which a dev views and thinks about code.
| agumonkey wrote:
| Brown University's PLT lab also vouched for this
| approach. Their textbook starts with a description of
| sexps as syntax, saying that's the only parsing theory
| covered there.
| arethuza wrote:
| Pity it doesn't include a line for: x f
| silon42 wrote:
| x.f and x.f()
| cwillu wrote:
| That line necessarily comes after all the arguments about
| it.
| xico wrote:
| It seems the parentheses are only used to indicate priority,
| like in Haskell, so you wouldn't really use them for `fn (1)`.
| eatonphil wrote:
| Those two function calls are the same thing in OCaml and
| Standard ML.
| aloisdg wrote:
| Coming from OCaml, F# uses the same syntax. Works very well.
| [deleted]
| aconst wrote:
| Hello, I just wanted to point out that if I had not read comments
| here and had not looked for a second example linked to "Job", I
| would not have realized there were dots under the first example.
| You may want to make them more visible ^^;
| coryfklein wrote:
| Very interesting to see you essentially replace Tuples with Cons!
| I'll be following that development for sure. Coming from Scala,
| Tuples are entirely how you represent Product and then that has
| tons of downstream implications on case classes and constructors.
| This leads to situations like Scalaz creating NonEmptyList to get
| Tuple-like behavior out of List. So your approach has the
| potential to unify these similar but different types and I find
| that fascinating!
| Garlef wrote:
| Looks great!
|
| A few questions (hopefully the author still reads it):
|
| * Any plan for support of arrow/monad comprehensions?
|
| * Semi-related: when it comes to generators, it might be
| worth considering making them clonable (see
| https://github.com/pelotom/burrido)
| jfecher wrote:
| (1): No support. Some monad or early-error-return sugar used
| to be considered, but effects cover most of the use cases of
| monads and are easier to use, so I plan on emphasizing them.
| As for
| arrows, I have to say here that I've actually never used them
| in haskell so I don't know how useful they'd be.
|
| (2): Thanks for the resource, I'll look it over :). I remember
| seeing a vaguely related note that multicore OCaml used to
| provide an `Obj.clone_continuation` function but no longer
| does.
| Of course, ante's algebraic effects design is rather different
| so that may not apply anyway. It may depend on the Generator
| whether it is cloneable or not. Some generators are just
| functions that can be arbitrarily cloned, but others are
| closures which may have restrictions in whether their
| environment is cloneable. Ante's clone story in general needs
| more work.
| ncmncm wrote:
| This looks like it could become a very interesting language.
| Unusually for a new language, I spotted zero red flags.
|
| I did do a quick ^f for "destructor" and "drop"...
|
| The intro mentions memory management, and inferring lifetimes.
| How do I make something happen at end of lifetime?
| bruce343434 wrote:
| RE lifetime inference: how does it infer lifetimes for mutually
| recursive functions that return refs to what would be local data?
| Does it automatically infer a dynamic stack?
| jfecher wrote:
| Lifetime inference compiles to destination passing so for most
| cases a single stack allocation in a prior function can be
| used. For your example of mutually recursive functions
| allocating in a loop the destination would be a region that
| will grow dynamically like an arena allocator. Since refs are
| typed, ref elements of the same type will be allocated next to
| each other in memory.
|
| You don't touch on it, but there are some more difficult
| cases with lifetime inference as well. Namely, when
| branching, the compiler must decide whether or not to
| extend a ref's lifetime. This and
| a lack of granularity in container types are known problems
| with region inference (they lead to more memory than necessary
| being used by assuming the longest lifetimes), and are things I
| hope to tackle.
| jfecher wrote:
| Hello, author here! As the website says, the compiler itself is
| still in a very early state where basic things like functions,
| types, traits, inference, monomorphisation, and codegen are
| implemented. The fun stuff of algebraic effects, lifetime
| inference, and refinement types are not however. Though I can
| elaborate on implementation strategies of these for anyone
| curious. For example, algebraic effects in existing languages can
| be quite slow at runtime due to the use of continuations, the
| translation to a monad stack, and/or dynamically finding handlers
| at runtime. It is my plan to monomorphise them away and inline
| all these cases as in the following paper (1). With this,
| handlers are inlined into functions, continuations are normal
| closures, and continuations that aren't called multiple times are
| inlined.
|
| (1) Zero-cost Effect Handlers by Staging:
| http://ps.informatik.uni-tuebingen.de/publications/schuster1...
| vanderZwan wrote:
| As a person named Job I'm a little confused by the second
| example on the website, but that's probably my own
| interpretation bias :p
|
| Joking aside, looks cool, good luck with the project!
| funny_falcon wrote:
| I'd really like to find "the low-level functional language"
| without "the fun stuff". Or at least it should have a simple
| and boring subset that is usable without "the fun stuff".
|
| Something like Caml Light - the precursor to OCaml - but
| with translation to C instead of bytecode, so it would be
| fast and would have comfortable integration with C
| libraries.
| cardanome wrote:
| > fun stuff of algebraic effects, lifetime inference, and
| refinement types
|
| > I'd really like to find "the low-level functional language"
| without "the fun stuff"
|
| But current stable OCaml has none of the "fun stuff"
| mentioned and compiles to native code. So isn't that
| exactly what you want?
|
| It doesn't even need lifetime analysis because automatic
| garbage collection be praised.
|
| And algebraic effects are awesome. Sure, they are not
| mainstream yet, but conceptually there is a strong
| parallel to the Common Lisp condition system, which is
| quite established. Not sure why you wouldn't want to have
| them. Also, it will still be a long way until we see them
| used in user-facing stuff in OCaml.
| funny_falcon wrote:
| OCaml doesn't compile to C. Sometimes having translated C
| source is a major gain.
|
| I'm not an OCaml implementation expert, but I suppose
| OCaml exception handling and the garbage collector could
| be tricky to account for when one extends or embeds OCaml.
|
| To be honest, a colleague of mine did OCaml embedding
| once. He made it possible to load natively-compiled
| plugins written in OCaml into C-based software. And it
| worked. I didn't dig into the details, though. There was
| at least one bug regarding garbage collection (and it
| happens that I fixed it).
| Drup wrote:
| `ocaml-ctypes` currently supports "reverse-bindings"
| (making OCaml functions available from C) out-of-the-box
| and mostly takes care of the intersection you are talking
| about, so this already works quite well.
|
| The only gain from emitting C code is portability to weird
| architectures that would be covered by C compilers but not
| the OCaml one; which is arguably a pretty niche use-case.
| throwaway17_17 wrote:
| This was basically my motivation when I designed the language
| I use for my day-to-day programming tasks. I just wanted
| a 'functional' sort-of language that compiled to C.
|
| Even though I'll never release it, it was a rewarding
| theoretical and practical exercise.
|
| As an aside, I really love how detailed and example-ful
| Ante's site is. It looks like a 'fun' language and I hope the
| creator keeps working on it.
| funny_falcon wrote:
| Maybe it is time for a "coming out"? (Excuse me for the
| joke.)
|
| Really, perhaps your language is The Next One.
| jfecher wrote:
| This is definitely an interesting thought. My thinking is
| that "the fun stuff" tends to help make the language more
| functional, so removing it you are left with a more
| imperative language resembling a C clone with traits and type
| inference. Then if you want easier C interop you must remove
| traits and either remove modules as well or provide a
| standard method of mangling module names into function names.
| At that point I think you may as well use an existing "better
| C" language like Odin, Zig, Jai, C3, etc.
| [deleted]
| trumpeta wrote:
| what makes Ante low level? Just from a cursory look over
| the website seems pretty high level to me.
| jfecher wrote:
| My definition of low level is no tracing GC, values are
| unboxed by default, and users still have control to do
| low level things (raw pointers, other unsafe operations)
| when needed, even if it is not the default.
| rayiner wrote:
| Very neat project! I noticed that Ante doesn't have explicit
| region type declarations. As I recall, existing algorithms
| implemented for ML can sometimes infer very large regions which
| causes memory usage to balloon. It looks like smart pointers
| are part of the plan to address that possibility, but I'd love
| to hear more about your thoughts on memory management.
| jfecher wrote:
| Correct, a key goal is to have no explicit region/lifetime
| annotations. There have been several papers on region
| inference after the originals by Tofte & Talpin, all
| attempting to refine the original analysis by inferring
| shorter lifetimes. First by analyzing when a region can be
| safely emptied and re-used, then by abandoning the stack
| discipline, etc. Unfortunately, none of these are viable in a
| real program in my opinion. Although they each infer shorter
| lifetimes in a few cases the core problem of "collections
| will unify the lifetime variables of all elements in the
| collection" and "branching on a value and conditionally
| returning it extends its lifetime, even if the branch was not
| taken" remain unsolved.
|
| An ideal solution to me needs to solve these problems. Since
| there is already a large body of research trying to address
| this on the static side and failing, I believe it needs to be
| solved with runtime checks. The specifics of this I'm still
| exploring, but it's worth mentioning these checks would only
| be necessary to tighten existing lifetimes, so one can
| envision annotations or compiler options to elide them if
| desired.
| Lifetime inference in MLKit (and I believe in ante as well)
| tends to speed things up by turning more dynamic allocations
| into stack allocations, so there is some room there for
| runtime checks without making the result more expensive than
| the version with dynamic allocation.
| raphlinus wrote:
| Hey, thanks for calling my work "not viable!" (Co-author of
| "Better Static Memory Management" here)
|
| Seriously, this looks promising and I'm very interested to
| see where it goes.
| rayiner wrote:
| I'm curious what the rationale is for not making regions
| explicit in the source code. It seems like a downside of
| region inference for a systems language is the
| unpredictability of the inferred regions.
| charleskinbote wrote:
| Thanks for sharing!
|
| The dot product example gave me pause because map2 seems to be
| the same as zipWith. Does that exist in Ante? Without context I
| might have thought map2 was going to act as bimap. Take that
| for what you think it's worth :)
|
| Also I might be having a brain fart -- but isn't the dot
| product in your example equal to 32?
| jfecher wrote:
| map2 is indeed another name for zipWith. I believe I got that
| name from Racket if memory serves. Compared to zipWith I like
| its symmetry with the 1 argument map. I also wasn't aware of
| bimap! I can't seem to find a function of that name online,
| though I did find the BiMap haskell package, is that what
| you're referring to?
|
| And yes, the dot product should be 32, thank you :)
| pwm wrote:
| Not OP but they meant
| https://hackage.haskell.org/package/base-4.16.1.0/docs/Data-...
|
| As you have the normal map function for Functors (using
| Haskell):
|
|     > :t fmap
|     fmap :: Functor f => (a -> b) -> f a -> f b
|
| you can have bimap for Bifunctors:
|
|     > :t bimap
|     bimap :: Bifunctor p => (a -> b) -> (c -> d) -> p a c -> p b d
|
| which specialised to pairs is:
|
|     > :t bimap @(,)
|     bimap @(,) :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
| eruditely wrote:
| I was wondering why the dot product wasn't 32! Good thing I
| checked. Also this seems like a cool new language, I need
| to learn a functional language, I wonder how far this
| language will go in development.
| PowerBallZed wrote:
| I don't see a LICENSE file anywhere in the repo or the website.
| Nobody is allowed to use Ante?
| airstrike wrote:
| > Nobody is allowed to use Ante?
|
| :eyeroll:
| postingforonce wrote:
| https://docs.github.com/en/repositories/managing-your-reposi...
|
| >However, without a license, the default copyright laws
| apply, meaning that you retain all rights to your source
| code and no one may reproduce, distribute, or create
| derivative works from your work.
| airstrike wrote:
| What do you think is more likely, that the author forgot
| to add a LICENSE file or that he actually doesn't intend
| for ANYONE to use the language he created? Give me a
| break
| b3morales wrote:
| Looks like a great start! Your attention to programmer
| ergonomics is admirable. Added to my weekend hacking reading
| list.
| Sentient-HN wrote:
| Interesting; we'll have to see how it develops in the future.
| I normally dislike languages without {}, but I suppose that
| is just preference.
| timbit42 wrote:
| I hate having to press Shift to get braces. Let's use square
| brackets which require no modifier key.
| throwaway17_17 wrote:
| Just a question: are you considering the frequency of
| usage of each symbol in that suggestion? If {} is for
| opening/closing functions or code blocks/scope AND [] is
| for making arrays, lists, or some other data structure: do
| you only want them switched where, idiomatically, there
| are fewer function definitions than instances of the data
| structure? Or is it an overall objection to any shift+key
| operator in a basic syntax?
| zonotope wrote:
| Why, why, why with the significant whitespace? That makes it a
| non-starter for me.
| rmrfchik wrote:
| The age of "var, val" is over. Now it's the age of
| "significant whitespace". Seriously though, it seems like
| indent syntax is the default nowadays. Python has had way
| too big an influence on language designers.
| pjmlp wrote:
| ML languages did it first.
| jfecher wrote:
| I'm not a fan of python (it handles significant whitespace
| somewhat poorly). Cases like mixed tab-space whitespace and
| single-line only lambdas are python-specific problems for
| example.
|
| I chose it mainly because I like the style and I haven't
| found it to be an issue in practice yet, especially with
| flexible rules for line continuations. The biggest detriment
| to me is the lack of auto-formatters for indentation.
|
| Edit: I'll add to this a point I haven't seen discussed
| before about functional languages using the `f a b` syntax
| specifically. Without significant whitespace you cannot
| really have semicolon elision with this syntax since almost
| any line may be interpreted as a function application of the
| next line. E.g. `a = b + c` and `foo 32` as subsequent lines
| would be interpreted as `a = b + (c foo 32)`, hence why ocaml
| requires semicolons. For some people semicolons are fine, but
| they're just noise to me :)
|
| There's more detail on the specific significant whitespace
| scheme used on the website here:
| https://antelang.org/docs/language/#significant-whitespace
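| For comparison, JavaScript's optional semicolons hit the same
| class of ambiguity: a line beginning with ( is read as
| continuing the previous expression. A small sketch (a JS
| analogue only, not Ante or OCaml syntax):

```javascript
const c = (x) => x * 2;
const b = 1;

// Without a terminating semicolon, the parser reads straight through
// the line break: `b + c` followed by `(3)` parses as `b + c(3)`,
// i.e. a function call, not two independent statements.
const a = b + c
(3);

console.log(a); // 7, i.e. 1 + c(3), not `1 + c` and then a stray (3)
```

| This is why style guides for semicolon-free JS warn against
| starting a line with ( or [.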
| strbean wrote:
| Regarding single-line only lambdas, I think that isn't an
| intractable issue with the grammar.
|
| Guido van Rossum basically said parsing them is hard, and
| he doesn't like lambdas in the first place, so it won't
| happen. [0]
|
| Similar to "tail call optimization encourages dirty FP
| nerds" logic. Terrible decisions IMHO.
|
| [0]:
| https://www.artima.com/weblogs/viewpost.jsp?thread=147358
| jfecher wrote:
| Agreed. Other whitespace sensitive languages like
| Haskell, F#, and Ante handle multiline lambdas just fine
| rmrfchik wrote:
| You claim significant whitespace helps mitigate "goto
| fail" errors. As for me, indent syntax allows more human
| errors during refactoring. When moving code blocks around,
| you can easily misplace whitespace, and the
| compiler/editor has no ability to help you.
| jfecher wrote:
| This is definitely an issue, but isn't one I run into
| often. Just like if I'm pasting code in rust I need to
| make sure it is inside or outside an if statement, I
| likewise need to ensure pasted code in ante is
| inside/outside the if statement by looking at its
| indentation relative to the previous line. The compiler
| can help somewhat here, just not a lot. If your
| indentation is not on an existing indentation level you
| will get an error since you cannot randomly start indent
| blocks in your code. I do think the biggest drawback of
| indentation-sensitive syntax in general is the loss of
| auto-formatting indentation though.
| PontifexMinimus wrote:
| That and Haskell.
| hyperhopper wrote:
| You forgot to say why or provide any rationale.
| ricardobeat wrote:
| Note to the website authors: the carousel with examples is not
| usable on mobile - trying to scroll the code swipes away to the
| next item. Would be better as just a series of example blocks.
| sexy_seedbox wrote:
| On mobile, I just go to [?] then "Desktop site", much wider
| display.
| qwertox wrote:
| Yet it's great on desktop. What I liked the most about it was
| that it doesn't auto-advance.
|
| Generally, the entire site has a really nice design, at least
| on desktop. One of the prettiest I've seen.
| anentropic wrote:
| Even on desktop I wanted to be able to swipe (scroll gesture)
| or use arrow keys to advance the slide, but had to click the
| little dot icons.
| qwertox wrote:
| I was able to drag it sideways with the mouse.
| [deleted]
| PontifexMinimus wrote:
| Carousels should simply be abolished. Just have paragraphs of
| text/image, one after the other.
|
| Also, black text on white background is easier to read.
|
| And don't make your web design too complex for Firefox reader
| mode to work: if you do, that's a sign you're doing it wrong.
| jfecher wrote:
| Ah, yes. The carousel has been a source of frustration
| especially on mobile for designing the website. Perhaps
| disabling swiping to scroll on the carousel on mobile would
| help. Or defaulting to use a dropdown to select the example on
| mobile instead.
| messe wrote:
| It's broken on desktop as well. I was trying to select some
| example code to copy and paste into a comment here, but
| clicking on it just drags the carousel rather than allowing
| me to select text.
| jasonhansel wrote:
| This is cool! I _really_ like the syntax for algebraic effects,
| which would be great for mocking. I also like the way that
| "impls" can be passed either explicitly or implicitly; the lack
| of global coherence seems like a reasonable tradeoff. Have you
| thought about using existential types to support dynamic dispatch
| (like Rust's "trait objects")?
|
| That said, I'm a bit skeptical of the utility of refinement types
| (and dependent types in general). They greatly increase a
| language's complexity (by obscuring the type/value distinction
| and making type inference and checking more complicated). I'm not
| sure the benefits are worth it, particularly because of the so-
| called "specification problem."
| jfecher wrote:
| Thanks! IMO it is very important that effects have an easy
| syntax to read/understand, since they will be both used
| pervasively and foreign to most new users. I do have an idea
| for "traits as types" which would cover static and dynamic
| dispatch: https://antelang.org/docs/ideas/#traits-as-types. So
| a use like `print (x: Show) : unit = ...` would be usable with
| static or dynamic dispatch, and more complex cases like
| `print_all (x: Vec Show) : unit = ...` would always use
| dynamic dispatch. Perhaps this may be too confusing though,
| and there are other considerations as well, which is why it's
| only an idea for now.
|
| Your concern on refinement types is definitely valid and is
| something I've been thinking about. They are certainly useful
| in some cases like array indices or passing around certain
| predicates but whether these are useful enough to offset the
| implementation cost and brain tax is an open question.
| dixego wrote:
| What is the specification problem? I don't think I've heard the
| term before.
| myco_logic wrote:
| This looks really lovely, I look forward to following the
| maturation of Ante in the future. I've often thought that the
| niche of a general purpose low-level FP language was a promising
| space. The only other langs that fit in there are ATS[1], which
| is notoriously complex/difficult-to-learn, and Futhark[2], which
| is more GPGPU/scientific-computing specific.
|
| We've got Rust, which is essentially a C-style lang that steals
| all kinds of goodies from the ML family; it's nice to see Ante as
| a kind of inverse to this: i.e. an ML style lang that borrows
| some of the nice bits from traditional imperative langs (and
| hopefully maintains their performance characteristics).
|
| Looking through the language tour there already seem to be a
| plethora of very sensible/ergonomic design decisions here. The
| 'loop' and 'recur' keywords are a great feature, making a nice
| little functional nod towards while loops. As a long-time Haskell
| user the explicit currying initially turned me off, but after
| seeing a couple examples I can see how it's actually a very
| reasonable solution, and moreover the ability to curry out of
| order is really nice instead of having to use 'flip' or similar
| combinators (as a side note, the explicit currying reminds me a
| bit of APL's ⍺ and ⍵ arguments in dfns, a feature I'd love to see
| pop up more). The paired tuples also seem like they'd be a
| pleasure to use; certainly a bit more flexible than tuples in
| other ML style langs. Making '.' the pipeline operator is also a
| smart bit of syntax, and I can see it being very accessible to OO
| programmers in that it looks (and acts) like the method chaining
| they're familiar with. Refinement Types seem like a good
| alternative to full on dependent typing (ATS has an analogous
| (and more general) proof system for ensuring things like array
| indices are valid (an essential feature in a low level FP lang),
| but it involves threading those proofs through your program which
| seems much more clunky than what's presented here).
|
| Overall I'm really excited to see where Ante goes, it seems like
| a very pragmatic functional language, something the world
| definitely needs more of...
|
| [1]: http://www.ats-lang.org/Home.html
|
| [2]: https://futhark-lang.org/
|
| EDIT: In regards to low-level FP langs there's also Carp, Erik
| Svedäng's nifty little statically typed Lisp. It's like Scheme
| with a Rust-y memory model:
|
| https://github.com/carp-lang/Carp
| jfecher wrote:
| Author here, thank you for your interest! There are definitely
| a lot of small design decisions that add up in language design,
| from using `|` for match cases to avoid double-indentation, to
| the use of pairs over tuples, to how methods are resolved and
| chained, it is nice to have others appreciate the small things
| sometimes :).
|
| I'll avoid posting it here but if you do want to follow ante's
| development the best place is on its discord which is linked on
| the github page.
| [deleted]
___________________________________________________________________
(page generated 2022-06-17 23:00 UTC)