[HN Gopher] The Swift compiler is slow due to how types are inferred
       ___________________________________________________________________
        
       The Swift compiler is slow due to how types are inferred
        
       Author : paraboul
       Score  : 184 points
       Date   : 2024-06-12 17:59 UTC (5 hours ago)
        
 (HTM) web link (danielchasehooper.com)
 (TXT) w3m dump (danielchasehooper.com)
        
       | jshier wrote:
       | They never will, since it's also one of Swift's greatest
       | strengths. What they may, eventually, do is dedicate the
       | resources to minimize the negative aspects of the system while
       | documenting clear ways to mitigate the biggest issues.
       | Unfortunately Apple's dev tools org is chronically under
       | resourced, which means improvements to the inference system and
       | its diagnostics come and go as engineers are allowed to work on
       | it. Occasionally it will improve, only to then regress as more
       | features are added to the language, and then the cycle continues.
        
         | fooker wrote:
         | >chronically under resourced
         | 
         | This is very true, Apple sees compiler jobs as a cost center.
        
         | tmpz22 wrote:
          | I think this is an unfair characterization. Yes, Apple's
          | developer ecosystem has a lot of fair complaints, and I've
          | personally run into the issues in this article, particularly
          | with newer APIs like SwiftData's #Predicate macro. But just
          | two days ago we saw a lot of concerted efforts to fix
          | systemic problems, like Xcode the editor, or compile times
          | via the Explicit Modules improvements.
         | 
         | I think you're painting with too heavy a brush. Apple clearly
         | is dedicating resources to long-tail issues. We just saw
         | numerous examples two days ago at WWDC24.
        
           | jshier wrote:
            | No, this is just the typical Apple cycle I alluded to.
           | Improvements are saved up for WWDC, previewed, then regress
           | over the next year as other work is done, only for the
           | process to repeat. They've demonstrated small improvements
           | practically every year, yet the compiler continues to regress
           | in build performance. Notably, the explicit module build
           | system you mentioned regresses my small project's build time
           | by 50% on an M1 Ultra. And even without it, overall build
           | performance regressed 10 - 12% on the same project.
        
           | plorkyeran wrote:
            | Explicit modules make build times _worse_, not better. Yes,
           | this is the exact opposite of what Apple claims they do, and
           | I am genuinely baffled by the disconnect. Usually Apple's
           | marketing is at least directionally true even if they
           | overstate things, but in this case the project appears to
           | have entirely failed to deliver on what it was supposed to do
           | but it's still being sold as if it succeeded.
           | 
            | On top of that, the big thing we _didn't_ see announced this
           | year was anything at all related to addressing the massive
           | hit to compile times that using macros causes.
        
       | fooker wrote:
        | This is not a fixable flaw. Solving these constraints
        | efficiently can definitely get you a Turing Award; it's
        | basically the SAT problem.
        | 
        | And without this type system, Swift is just Objective-C with a
        | prettier syntax, so Apple has to bite the bullet and bear with
        | it.
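        | 
        | To make the failure mode concrete, a hedged illustration (mine,
        | not from the article): every literal below could be Int, Double,
        | Float, CGFloat, etc., and every operator has many overloads, so
        | the solver's search space multiplies with each operator.
        | Depending on the toolchain version, expressions in this style
        | have been known to fail with "the compiler is unable to type-
        | check this expression in reasonable time; try breaking up the
        | expression into distinct sub-expressions":
        | 
        |     let x: Double = -(1 + 2) + -(3 + 4) + -(5 * 6) + -(7 + 8)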
        
         | coldpie wrote:
         | It sort of is fixable, though. If you think about it, the
         | problem is a bunch of functions are all mapped to one of a
         | small set of names: +*/, etc. That is, the operators. If we
         | didn't try to cram all this functionality into a tiny handful
         | of symbols because of some weak analogy they have with basic
         | math operations[1], then the compiler would have far fewer name
         | conflicts to try to disambiguate, and the problem goes away on
          | its own. Like yeah, the problem still exists if we made a few
          | dozen functions all called "_a_", but the solution is to _not
          | do that_, not to give up on an otherwise fine type system.
         | 
         | I'm convinced operator overloading is an anti-feature. It
         | serves two purposes:
         | 
         | 1) to make a small set of math operations easier to read (not
         | write), in the case where there are no mistakes and all readers
         | perfectly understand the role of each operator; and,
         | 
         | 2) to make library developers feel clever.
         | 
         | Operator-named functions are strictly worse than properly named
         | functions for all other uses. Yes, yes, person reading this
         | comment, I know you like them because they make you feel smart
         | when you write them, and you're going to reply with that one
         | time in university that you really needed to solve a linear
         | algebra problem in C++ for some reason. But they really are
         | terrible for everyone who has to use that code after you.
         | They're just badly named functions, they're un-searchable, they
          | make error messages unreadable, and they are the cause of the
          | naming conflict that is at the root of the linked blog post.
         | It's time to ditch operator overloading.
         | 
         | [1] Or because they look like the same symbol used in some
         | entirely other context, god, please strike down everyone who
         | has ever written an operator-/ to combine filesystem paths.
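          | 
          | To illustrate that pattern in Swift (hypothetical Path type,
          | modeled on real path libraries): "/" is borrowed because it
          | looks like a path separator, not because anything is being
          | divided.
          | 
          |     struct Path {
          |         var raw: String
          |         // The overload under criticism: '/' as "append
          |         // path component".
          |         static func / (lhs: Path, rhs: String) -> Path {
          |             Path(raw: lhs.raw + "/" + rhs)
          |         }
          |     }
          | 
          |     let logs = Path(raw: "/var") / "log" / "app"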
        
         | shagie wrote:
         | > ... it's basically the SAT problem.
         | 
         | I am reminded of this classic CodeGolf.SE challenge of "P = NP"
         | https://codegolf.stackexchange.com/a/24419
         | 
         | The problem was set as:
         | 
         | > Your task is to write a program for SAT that executes in
         | polynomial time, but may not solve all cases.
         | 
         | To which Eric Lippert wrote:
         | 
         | > "Appears" is unnecessary. I can write a program that really
         | does execute in polynomial time to solve SAT problems. This is
         | quite straightforward in fact.
         | 
         | And has a spoiler that starts out as:
         | 
         | > You said polynomial runtime. You said nothing about
         | polynomial compile time. This program forces the C# compiler to
         | try all possible type combinations for x1, x2 and x3, and
         | choose the unique one that exhibits no type errors. The
         | compiler does all the work, so the runtime doesn't have to. ...
         | 
          | Unfortunately the blog post it linked to, which went into
          | greater detail at
          | https://devblogs.microsoft.com/ericlippert/lambda-expression...
          | is no longer available (even via wayback).
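          | 
          | A toy Swift analogue of the same trick (all names invented for
          | illustration): overload resolution happens entirely at compile
          | time, and the compiler must find the one combination that
          | type-checks. Scale the encoding up and that search is the
          | expensive part.
          | 
          |     struct T {}  // stands in for "true"
          |     struct F {}  // stands in for "false"
          | 
          |     // The clause (a OR b): every satisfying assignment gets
          |     // an overload; (F, F) deliberately has none.
          |     func clause(_ a: T, _ b: T) -> T { T() }
          |     func clause(_ a: T, _ b: F) -> T { T() }
          |     func clause(_ a: F, _ b: T) -> T { T() }
          | 
          |     let ok = clause(T(), F())      // compiles: a model exists
          |     // let bad = clause(F(), F())  // rejected at compile time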
        
         | fweimer wrote:
         | I know that historically, we were taught that NP-complete
         | problems are unsolvable for interesting problem sizes, but that
          | was already a bit of a lie when I went to university. (TSP was
         | presented as a warning example, but even back then, very
         | efficient approximations existed.)
         | 
         | These days, when you have a SAT-like problem, you're often done
         | because you can throw a SAT solver at it, and it will give you
         | an answer in a reasonable time. Particularly for such small
         | problems like this one here. We routinely solve much larger and
         | less structured SAT instances, e.g. when running package
         | managers.
        
           | cerved wrote:
            | depends on what you consider an interesting and/or
            | acceptable solution, I suppose
        
         | Dylan16807 wrote:
         | Both your points can be addressed by: It's not a choice between
         | infer everything and infer nothing.
        
       | irdc wrote:
        | One could argue that anything that makes the development
        | process itself more efficient, as opposed to the compiling, is
        | worth it, since programmers themselves ain't getting any faster
        | anytime soon, but timing out after more than 40 seconds on a
        | state-of-the-art CPU because of a handful of lines is just
        | ridiculous.
        
         | jandrese wrote:
         | At the very least it seems like the compiler could math out how
         | many possible states it is about to check and if the value is
         | unreasonable instantly error out instead of trying to chew on
         | it for over half a minute before giving up.
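          | 
          | A sketch of that idea (hypothetical, nothing like the real
          | compiler internals): multiply the overload counts per operator
          | to bound the search space and refuse up front.
          | 
          |     func estimatedCombinations(_ overloadCounts: [Int]) -> Int {
          |         overloadCounts.reduce(1) { acc, n in
          |             // Saturate instead of trapping on overflow.
          |             let (value, overflow) = acc.multipliedReportingOverflow(by: n)
          |             return overflow ? Int.max : value
          |         }
          |     }
          | 
          |     // e.g. eight '+' operators with six overloads each
          |     let counts = Array(repeating: 6, count: 8)
          |     if estimatedCombinations(counts) > 1_000_000 {
          |         print("error: expression too complex; add type annotations")
          |     }
          | 
          | 6^8 is already ~1.7 million candidate combinations, which makes
          | it obvious why a handful of lines can chew up half a minute.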
        
         | rudedogg wrote:
         | These type inference landmines are all over the place with
          | SwiftUI too. I run into them with View/Shape/Color frequently.
        
         | cerved wrote:
          | I'm not sure I would draw the same conclusion; stricter
          | requirements on typing are less cumbersome these days with
          | better autocomplete and LLMs.
        
       | liuliu wrote:
        | The math type inference example makes the usual "what if Swift
        | could replace Python" claim a non-starter. As someone who has
        | to deal with this on a frequent basis, I find it pretty sad.
        | 
        | (I maintain s4nnc and a fork of PythonKit.)
        
         | manmal wrote:
         | Does this improve when declaring all variables with explicit
         | types before using them in expressions?
        
           | liuliu wrote:
            | Yeah, but people do like to write expressions like
            | y = .log(x * 0.1) + (1.24).squareRoot(), which is the kind
            | of thing Python does best (NumPy and PyTorch).
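            | 
            | For comparison, the annotated style that keeps the solver
            | happy (a sketch using Foundation's log rather than the
            | library-specific .log above):
            | 
            |     import Foundation
            | 
            |     let x = 2.0
            |     // Pinning each intermediate to Double shrinks the
            |     // search space, at the cost of NumPy-style terseness.
            |     let scaled: Double = x * 0.1
            |     let y: Double = log(scaled) + (1.24).squareRoot()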
        
         | wiseowise wrote:
         | > what if Swift can replace Python
         | 
         | What a ridiculous statement. I'm willing to bet everything I
         | have in life that this is never going to happen.
        
           | liuliu wrote:
            | In retrospect, yes. But Swift for TensorFlow and similar
            | projects received much attention at the time. (Also see Mojo
            | as another attempt.)
        
       | temp123789246 wrote:
       | Does anyone know why, anecdotally, it seems like the slowness of
        | type inference is more of a pain point in Swift than in OCaml,
        | ReScript, PureScript, Haskell, etc.?
        
         | tines wrote:
         | Is it that Haskell, at least, doesn't support overloading in
         | the same way as Swift? I don't know either of them well enough
         | to be sure.
         | 
         | It seems like there's a combinatorial explosion of possible
         | overloads in Swift, whereas if you implement a function with
         | the same ergonomics in Haskell (e.g. a printf-like function),
         | the only thing the compiler has to do is ask "Does type X have
         | an implementation for typeclass Show? Yes? Done."
         | 
         | Essentially Haskell solved this overload inference problem in
         | the same way that iterators solve the M*N problem for basic
         | algorithms: convert all these disparate types to a single type,
         | and run your algorithm on that.
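          | 
          | A rough Swift rendering of that shape (hypothetical protocol,
          | purely for illustration): one generic function with a single
          | conformance check, instead of N concrete overloads to
          | disambiguate.
          | 
          |     protocol Showable { var shown: String { get } }
          |     extension Int: Showable { var shown: String { String(self) } }
          |     extension Double: Showable { var shown: String { String(self) } }
          | 
          |     // One question for the compiler: does T conform to Showable?
          |     func display<T: Showable>(_ value: T) {
          |         print(value.shown)
          |     }
          | 
          |     display(42)    // "42"
          |     display(3.14)  // "3.14"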
        
           | ackfoobar wrote:
           | "Does type X have an implementation for typeclass Y" isn't
           | always easy to answer.
           | 
           | https://aphyr.com/posts/342-typing-the-technical-interview
        
             | tines wrote:
             | That post, while awesome (as is the rest of aphyr's stuff),
             | is a lot to wade through to get to the point you're trying
             | to convey. Can you spell it out for me?
        
               | ackfoobar wrote:
               | That typeclass resolution can encode some heavy
               | computation, the example being n-queens in the article.
        
               | tines wrote:
               | That's only the case when you turn on the "enable
               | arbitrary computation in typeclasses" flag, so I'd say
               | it's not much of a worry.
        
         | steinuil wrote:
         | I'm not an expert on the theory, but OCaml has a _very_ fast
         | compiler and while it is (almost) capable of fully
         | reconstructing the types from a program with no annotations, it
          | doesn't have to deal with ad-hoc polymorphism and takes some
         | shortcuts like weak polymorphism when it gets too hard:
         | https://www.ocaml.org/manual/5.2/polymorphism.html
        
           | tombert wrote:
            | Wait, what? In Haskell the types are usually directly
            | inferable from how the arguments are used, and when you put
            | a type annotation it's usually not explicit types
            | (Num a => a -> b -> c).
           | 
           | I almost never bother putting types in Haskell, unless I want
           | to guarantee some constraint, in which case I typically use
           | typeclasses. Maybe I'm just weird but I don't think so. One
           | of the very few things I actually like about Haskell is how
           | good the type inference is.
        
           | fweimer wrote:
            | Try this:
            | 
            |     let f0 = fun x -> (x, x) in
            |     let f1 = fun y -> f0 (f0 y) in
            |     let f2 = fun y -> f1 (f1 y) in
            |     let f3 = fun y -> f2 (f2 y) in
            |     let f4 = fun y -> f3 (f3 y) in
            |     let f5 = fun y -> f4 (f4 y) in
            |     f5 (fun z -> z)
           | 
           | Lifted from https://dl.acm.org/doi/pdf/10.1145/96709.96748
           | via Pierce, Types And Programming Languages.
        
             | _flux wrote:
             | But that's just a type that is huge. I didn't want to wait
             | for the evaluation, but if I drop the f5 out, I got a type
             | that is 1.6 megabytes long when printed without spaces.
             | 
             | It's still very fast for "normal size" types. That reduced
             | version compiles in 151 milliseconds.
        
         | nyrikki wrote:
          | My guess is the different extensions to the Hindley-Milner
          | type system, which is EXPTIME-complete in the worst case.
         | 
         | HM isn't bidirectional in the special case, so probably the
         | features they added vs the top level universal quantifier type
         | that has pathological low time complexity.
        
       | peppertree wrote:
       | I have a feeling it's going to be nearly impossible to replace it
       | without breaking a lot of existing code, since the syntax will
       | have to be a lot more explicit.
        
       | ajkjk wrote:
       | The times here seem unreasonably bad even with the bad algorithm.
        | Something else has got to be going on. Maybe some kind of
        | hidden factorial complexity when it tries every combination?
        
         | novok wrote:
         | There are a bunch of other chokepoints, but type inference is a
         | big part of it:
         | https://github.com/apple/swift/blob/main/docs/CompilerPerfor...
         | 
          | Another thing I would add to Swift behind a flag is imports
          | based on specific files vs. an abstract "module"; there is a
          | lot of repeated work that happens because of that, last time I
          | looked.
        
         | floxy wrote:
          | Yes, I've never written a line of Swift, but these cases don't
          | seem to be of the usual variety that cause Hindley-Milner to
          | blow up. It seems like the Swift compiler source is available,
          | and these test cases are small. This is encouragement for
          | someone to spend a small amount of time digging into this,
          | just for the curiosity of it. And I mean, just something like:
          | fire it up in the debugger, start the compiler on one of these
          | test cases, and interrupt it when it looks like it is "hung",
          | but before it times out. Step through the code a bit to
          | identify the handful of functions that it's looping through,
          | and then report out what you find, and your best guess: either
          | the algorithm is implemented correctly and is hopelessly
          | intractable, or "hey, didn't we already check down this branch
          | previously?". I'll give you my next upvote on HN.
         | 
         | https://static.aminer.org/pdf/20170130/pdfs/popl/o8rbwxmj6h2...
         | 
         | https://github.com/apple/swift
        
       | Decabytes wrote:
        | I'm a big fan of the idea of Swift as a cross-platform,
        | general-purpose language, but it just feels bad without Xcode.
        | The VSCode extension is just okay, and all of the
        | tutorials/documentation assume you are using Xcode.
        | 
        | A lot of the issues that Swift is currently facing are the same
        | issues that C# has, but C# had the benefit of Mono and Xamarin,
        | and in general more time. Plus you have things like JetBrains
        | Rider to fill in for Visual Studio. Maybe in a few years Swift
        | will get there, but I'm just wary because Apple really doesn't
        | have any incentive to support it.
        | 
        | Funnily enough, the biggest proponent of cross-platform Swift
        | has been Miguel de Icaza, GNOME creator and cofounder of Mono,
        | the cross-platform C# implementation pre-.NET Core. His
        | SwiftGodot project even got a shout-out recently from Apple.
        
         | giancarlostoro wrote:
          | The only thing holding it back is Apple not investing in
          | making it happen. Swift is in a weird spot where it has so
          | much potential, but without investment in the tooling for
          | other platforms (which is uncommon for Apple in general) it
          | just won't happen, at least not as quickly as it could.
        
           | refulgentis wrote:
           | I wouldn't describe Swift itself as having so much potential:
           | I _loved_ it and advocated for it, for years. After getting
           | more experience on other platforms and having time to watch
            | how it evolved, or didn't, as per TFA, it's... okay to
           | mediocre compared to peers - Kotlin, Dart, Python come to
           | mind.
           | 
           | If Foundation was genuinely cross platform and open source,
           | that description becomes more plausible for at least some
           | subset of engineers. * (for non-Apple devs, Foundation ~=
           | Apple's stdlib, things like dateformatting)
           | 
           | I don't mean to be argumentative, I'm genuinely curious what
           | it looks like through someone else's eyes and the only way to
           | start that conversation is taking an opposing position.
           | 
            | I am familiar with an argument that it's better than Rust,
            | but I'd be very curious to understand whether "better than"
            | means "easier to pick up" or "better at the things people
            | use Rust for": i.e.
           | I bet it is easier to read & write, but AFAIK it's missing a
           | whole lot of what I'll call "necessary footguns for
           | performance" that Rust offers.
           | 
            | * IIRC there is an open source Foundation intended for
            | Linux? But it was sort of just thrown at the community to
            | build.
        
             | TheFragenTaken wrote:
              | I'm not deep in Swift, but this makes it seem like a
              | reimplementation of Foundation is open source:
              | https://github.com/apple/swift-corelibs-foundation/tree/main
        
             | sunnybeetroot wrote:
              | I'm surprised to see Python in that list. Swift being
              | type-safe and Python not puts Swift miles ahead.
        
               | refulgentis wrote:
               | One of those awkward things where I don't like it, and
               | wouldn't go back to nulls that blow up. But as far as
               | being the right tool/accessible it ended up winning use
               | cases where I expected scripting and Playgrounds to have
               | mindshare
        
               | jb1991 wrote:
               | Playgrounds is so painfully slow that you can't really
               | "play" with it at all.
        
               | wiseowise wrote:
               | Mypy is non-nullable, as far as I know. Unless you do
               | Type | None. (Or was it TS?)
        
               | wiseowise wrote:
                | Python has Mypy (yes, it's good enough, don't bother
                | arguing with me that it's not "real"), and its ecosystem
                | is leagues above anything Swift can offer.
        
             | Apocryphon wrote:
             | What is there to say in favor of Dart? Isn't it a "good
             | enough" middle-of-the-road language?
        
               | refulgentis wrote:
                | Pretty much, which I love. Opinionated TL;DR: Kotlin
                | without the duplicates of Java classes, or 20 different
                | inscrutable functional operators. Swift without the
                | compile times and architecture-astronaut stuff that
                | infected things built on it and makes Apple reinvent it
                | every 2-4 years, c.f. async/SwiftUI. Genuinely cross-
                | platform, both in terms of code* and UI**. It's
                | indistinguishable from native in the way that's
                | meaningful to users (note it's AOT on not-web, not JIT,
                | and does a bunch of obvious stuff: use the platform text
                | renderer, scrollbars, scroll velocity, etc.)
               | 
                | * I'm too old now, 36, and you have no idea how much I
                | roll my eyes internally at hopeful invocations of "man,
                | if only Swift was cross platform" / "look, Apple did
                | this, so Swift is coming cross platform". All the
                | "SwiftUI web is clearly coming" wishcasting from big
                | names in the Apple dev community who should have known
                | better broke me.
                | 
                | ** the mishmash of bad, competing solutions to bringing
                | iOS UI across Apple platforms forfeits the core premise
                | of an Apple-values-inheriting dev: I'm infinitely better
                | off having a base UI engine that renders the same stuff
                | on each platform than a shim that renders and acts
                | differently.
        
               | wiseowise wrote:
                | It is truly cross-platform, unlike Swift; it has a great
                | UI framework going for it that you can use everywhere,
                | and it has improved greatly as a language in recent
                | years while Swift continued siloing itself in the Mac
                | world.
                | 
                | Dart might not break world records for most innovative
                | or performant general-purpose language, but it's a
                | completely different language from 6 years ago.
        
           | smaudet wrote:
           | > The only thing holding it back is Apple not investing into
           | making it happen.
           | 
            | This seems to be a (bad) pattern with Apple, one that Google
            | used to (and still does) get a lot of flak for: this habit
            | of not investing in things and then letting them die slow,
            | painful deaths.
            | 
            | E.g. I remember this criticism being leveled at Safari a
            | lot.
           | 
            | But, for better or worse, Apple is not really a technology
            | company; it's a design company. They focus on their cash cow
            | (iPhone) and main dev tools (MacBook), and nearly everything
            | else is irrelevant. Even their ARM laptops aren't really
            | about being a great silicon competitor, I suspect. Their aim
            | is to simplify their development model across
            | phone/laptop/tablet and design seamless things, not to make
            | technically great or good things.
           | 
            | The reason(s) they haven't turned (as much) to
            | enshittification are probably that a) it goes against their
            | general design principles, b) they have enough to do
            | improving what they have and releasing new stuff, and c)
            | they aren't in a dominant/monopolistic market position where
            | they can suddenly create utter trash and get away with it
            | because there's nothing else.
           | 
           | And yes, they exhibit monopolistic behaviors within their
           | "walled garden", but if they make a product bad enough,
           | people can and will flee for e.g. Android (or possibly even
           | something microsoft-ish). They can't afford to make a
           | terrible product, but they can afford to abandon anything
           | that doesn't directly benefit their bottom line.
           | 
           | Which is why I suppose I generally stopped caring about most
           | things Apple.
        
         | elpakal wrote:
         | I use a swift toolchain with JetBrains' CLion (for CLIs) and
         | it's pretty good and quite refreshing when compared to Xcode.
        
         | JackYoustra wrote:
         | I would say the opposite, actually: at least on large pure
         | swift projects, Xcode grinds to a halt. Many of its features
         | come with unacceptable performance cliffs which are impossible
         | to patch. I ran into a particular problem with the build
         | documentation command recently:
         | https://www.jackyoustra.com/blog/xcode-test-lag
        
       | PaulHoule wrote:
       | One of the interesting tradeoffs in programming languages is
       | compile speed vs everything else.
       | 
        | If you've ever worked on a project with a 40-minute build (me),
        | you can appreciate a language like Go that puts compilation
        | speed ahead of everything else. Lately I've been blown away by
        | the "uv" package manager for Python, which not only seems to be
        | the first correct one but is also so fast I can be left
        | wondering if it really did anything.
       | 
       | On the other hand, there's a less popular argument that the focus
       | on speed is a reason why we can't have nice things and, for
       | people working on smaller systems, languages should be focused on
       | other affordances so we have things like
       | 
       | https://www.rebol.com/
       | 
        | One area I've thought about a lot is the design of parsers: for
        | instance, there is a drumbeat you hear about Lisp being
        | "homoiconic", but if you had composable parsers and your
        | language exposed its own parser, and if every parser also
        | worked as an unparser, you could do magical metaprogramming
        | with ease, similar to LISP. Python almost went there with PEG
        | but stopped short of a real revolution because of... speed.
       | 
        | As for the kind of problem he's worried about (algorithms that
        | don't scale), one answer is compilation units and careful
        | caching.
        
         | kettlecorn wrote:
         | > One of the interesting tradeoffs in programming languages is
         | compile speed vs everything else.
         | 
         | In the case of Rust it's more of a cultural choice. Early
         | people involved in the language pragmatically put everything
         | else (correctness, ability to ship, maintainability, etc.)
         | before compilation speed. Eventually the people attracted to
         | contribute to the language weren't the sort that prioritized
         | compilation speed. Many of the early library authors reflected
         | that mindset as well. That compounds and eventually it's very
         | difficult to crawl out from under.
         | 
         | I suspect the same is true for other languages as well. It's
         | not strictly a bad thing. It's a tradeoff but my point is that
         | it's less of an inevitability than people think.
        
           | throwaway894345 wrote:
           | I think most people haven't used many languages that
           | prioritize compilation speed (at least for native languages)
           | and maybe don't appreciate how much it can help to have fast
           | feedback loops. At least that's the feeling I get when I
           | watch the debates about whether Go should add a bunch more
           | static analysis or not--people argue like compilation speed
           | doesn't matter at all, while _actually using Go_ has
           | convinced me that a fast feedback loop is enormously valuable
           | (although maybe I just have an attention disorder and
           | everyone else can hold their focus for several minutes
           | without clicking into HN, which is how I got here).
        
             | DanielHB wrote:
              | A fast hot-code swap of a module is a more important
              | feature in my opinion, but it is somehow even harder to
              | find these days than a language with fast compilation.
             | 
             | But ideally I want both.
        
             | sans-seraph wrote:
             | In the case of Rust the fast feedback loop is facilitated
             | by the `cargo check` command which halts compilation after
             | typechecking. Unlike in Swift the typechecking phase in
             | Rust is not a significant contributor to compilation times
             | and so skipping code generation, optimization, and linking
             | is sufficient for subsecond feedback loops.
        
               | throwaway894345 wrote:
               | I mean, you still need to run code at the end of the day.
               | Yeah, the type checker will update your IDE quickly
               | enough, but you still need to compile and link at least a
               | debug build in order to meaningfully qualify as a
               | feedback loop IMHO.
        
               | sans-seraph wrote:
               | This was my initial mindset as someone whose background
               | lies in untyped languages, but after time with Rust I no
               | longer feel that way. My feeling now is that seeing a
               | Rust codebase typecheck gives me more confidence than
               | seeing a Python or Javascript codebase pass a test suite.
               | Naturally I am still an advocate for extensive test
               | suites but for my Rust code I only run tests before
               | merging, not as a continuous part of development.
               | 
               | To give an example, in the past week I have ported over a
               | thousand lines of C code to a successor written in Rust.
               | During development compilation errors were relatively
               | frequent, such as size mismatches, type mismatches,
               | lifetime errors, etc. I then created a C-compatible
               | interface and plugged it into our existing product in
               | order to verify it using our extensive integration test
               | suite, which takes over 30 minutes to run. It worked the
               | first time. In order to ensure that I had not done
               | something wrong, I was forced to insert intentional
               | crashes in order to convince myself that my code was
               | actually being used. Running that test suite on every
               | individual change would not have yielded a benefit.
        
               | throwaway894345 wrote:
               | > This was my initial mindset as someone whose background
               | lies in untyped languages
               | 
               | Yes, I understand and agree regarding Rust vs dynamic
               | languages, but to be clear my remark was already assuming
               | type checking. I still think you need a full iteration
               | loop even if a type checker gets you a long ways relative
               | to a dynamic language.
        
         | throwaway894345 wrote:
          | Most languages are forced to choose between tooling speed and
          | runtime speed, but Python has historically dealt with this
          | apparent dichotomy by opting for neither.
        
           | janice1999 wrote:
            | Python's real strength is the speed at which it can be
            | taught, read, and written.
        
             | KerrAvon wrote:
             | It's not, actually, any more than any other language. That
             | was Guido's original plan, but show a page of modern Python
             | code to someone who's never seen it before and they'll run
             | screaming. There is a minimal subset where you can say it
             | reads like pseudocode, but that's a very limited subset,
             | and, like AppleScript, you have to have a fair amount of
             | knowledge to be able to write it fluently.
        
               | wiseowise wrote:
                | It is. Compared to other languages (short of JS without
                | Symbols and the async/await/promises mumbo jumbo, or
                | Lisp) it has a much lower entry barrier.
        
               | throwaway894345 wrote:
               | Python absolutely has async/await/promises, and it's
               | actually quite a lot worse than JavaScript in this regard
               | because Python _also_ has synchronous APIs and _no_
               | tooling whatsoever to make sure you don't call a sync API
               | in an async function thereby blocking your event loop
               | (which, if your application is a networked service with
               | any amount of traffic at all, will typically result in a
               | cascading failure in production). I'm no great fan of
               | JavaScript, and I've written _wayyyyy_ more Python than
               | JS, but async/await/promises is exactly the wrong example
               | to make the case that Python is better.
        
               | wiseowise wrote:
                | Structured concurrency with exception groups makes it
                | untouchable for JS. If JS implements structured
                | concurrency like Python, Java, and Kotlin have, then
                | maybe it can be viable.
        
               | N1H1L wrote:
               | I am more and more convinced that type checked Python is
               | not always the best idea. The people who are the most
               | virulently pro type checking in Python are not data
               | science folks.
               | 
               | Python's type ecosystem's support for proper type checked
               | data science libraries is abysmal (`nptyping` is pretty
               | much the most feature complete, and it too is far from
               | complete), and has tons of weird bugs.
               | 
                | The Array API standard
                | (https://data-apis.org/array-api/latest/purpose_and_scope.htm...)
                | is a step in the right direction, but until that work is
                | close to some sort of beta version, data science folks
                | will have tons of type errors in their code, in spite of
                | trying their best.
        
             | forrestthewoods wrote:
             | Python's real strength is that it has a vast ecosystem of
             | uber powerful libraries written in C/C++.
        
               | pjmlp wrote:
               | Shared by any language with FFI capabilities.
        
               | PaulHoule wrote:
               | In theory. In practice people are very happy with what
               | happens when you                 import pandas
               | 
               | in Python, more so than competitors. I have been hoping
               | though that with the planned Java transition to FFI, you
               | can make Jython pass Python FFI through Java API to get
               | numpy and all that working.
        
               | pjmlp wrote:
                | More a side effect of teaching materials than anything
                | else, though.
                | 
                | Java still has the issue of the time Valhalla is taking.
        
             | pjmlp wrote:
              | It is an illusion that Python is like BASIC; in reality
              | Python 3.12 is rather complex, more like Common Lisp, when
              | taking into account all the language breaking changes
              | during the last 30 years, its capabilities, the standard
              | library, and key libraries in the ecosystem.
        
               | PaulHoule wrote:
               | Almost every modern language with a well-specified
               | runtime looks a lot like Common Lisp because Common Lisp
               | was one of the first languages specified by adults and
               | the Common Lisp spec had a lot of influence on future
               | languages like Python and Java. For instance most
               | languages have a set of data types in the language or the
               | standard library, such as bignums, that are similar to
               | what CL has.
        
               | melodyogonna wrote:
               | What Python has in its locker is progressive display of
               | complexity
        
             | throwaway894345 wrote:
             | I wish this were true. I've used Python professionally for
             | more than a decade and I still don't consider myself an
             | expert (but I consider myself an expert in Go despite
             | having only used it professionally for a few years). A few
             | things off the top of my head that I still don't understand
             | expertly and yet they chafe me quite often: how imports are
             | resolved, how metaclasses work, how packaging works, etc.
             | 
             | And on the beginner end, even simple things like
             | "distributing a simple CLI program" or "running a simple
             | HTTP service" are complicated. In the former case you have
             | to make sure your target environment has the right version
             | of Python installed and the dependencies and the source
             | files (this can be mitigated with something like shiv or
             | better yet an OS package, but those are yet another thing
             | to understand). In the latter case you have to choose
             | between async (better take care not to call any sync I/O
             | anywhere in your endpoints!) or an external webserver like
             | uwsgi. With Go in both cases you just have to `go build`
             | and send the resulting static, native binary to your target
             | and you're good to go.
             | 
             | And in the middle of the experience spectrum, there's a
             | bunch of stuff like "how to make my program fast", or "how
             | do I ensure that my builds are reproducible", or "what
             | happens if I call a sync function in an async http
             | endpoint?". In particular, knowing why "just write the slow
             | parts in multiprocessing/C/Rust/Pandas" may make programs
             | _slower_. With Go, builds are reproducible by default,
             | naively written programs run about 2-3 orders of magnitude
             | faster than in Python, and you can optimize allocations and
             | use shared memory multithreading to parallelize (no need to
             | worry if marshaling costs are going to eat all of your
             | parallelism gains).
             | 
             | "Python is easy" has _never_ been true as far as I can
             | tell. It just looks easy in toy examples because it uses
             | `and` instead of `&&` and `or` instead of `||` and so on.
        
         | diffxx wrote:
         | > One of the interesting tradeoffs in programming languages is
         | compile speed vs everything else.
         | 
         | Yes, but I don't think that compile speed has really been
         | pushed aggressively enough to properly weigh this tradeoff. For
         | me, compilation speed is the #1 most important priority. Static
         | type checking is #2, significantly below #1 and everything else
         | I consider low priority.
         | 
         | Nothing breaks my flow like waiting for compilation. With a
         | sufficiently fast compiler (and Go is not fast enough for me),
         | you can run it on every keystroke and get realtime feedback on
         | your code. Now that I have had this experience for a while, I
         | have completely lost interest in any language that cannot
         | provide it no matter how nice their other features are.
        
           | lolinder wrote:
           | > Nothing breaks my flow like waiting for compilation. With a
           | sufficiently fast compiler (and Go is not fast enough for
           | me), you can run it on every keystroke and get realtime
           | feedback on your code.
           | 
           | I already get fast feedback on my code inlined in my editor,
           | and for most languages it only takes 1-2 seconds after I
           | finish typing to update (much longer for Rust, of course).
           | I've never personally found that those 1-2 seconds are a
           | barrier, since I type way faster than I can think anyway.
           | 
           | By the time I've finished typing and am ready to evaluate
           | what I've written, the error highlighting has already popped
           | up letting me know what's wrong.
        
             | TillE wrote:
             | Yeah even with a large C++ codebase, any decent IDE will
             | flag errors very quickly in real time. I dunno, I've never
             | found that waiting a minute to run a test or whatever is
             | particularly detrimental to my workflow.
             | 
             | I understand the benefits of super fast iteration if you're
             | tweaking a GUI layout or something, but for the most part
             | I'd prioritize many many other features first.
        
               | hu3 wrote:
                | Fast compilation + unit tests allows me to quickly check
                | whether I broke stuff, covering a lot more than the
                | IDE's local checks.
        
           | tehnub wrote:
           | >With a sufficiently fast compiler (and Go is not fast enough
           | for me), you can run it on every keystroke and get realtime
           | feedback on your code.
           | 
           | What language are you using?
        
           | josephg wrote:
           | I think this is a false choice. It comes from the way we
           | design compilers today.
           | 
           | When you recompile your program, usually a tiny portion of
           | the lines of code have actually changed. So almost all the
           | work the compiler does is identical to the previous time it
           | compiled. But, we write compilers and linkers as batch
           | programs that redo all the compilation work from scratch
           | every time.
           | 
           | This is quite silly. Surely it's possible to make a compiler
           | that takes time proportional to how much of my code has
           | changed, not how large the program is in total. "Oh I see you
           | changed these 3 functions. We'll recompile them and patch the
           | binary by swapping those functions out with the new
           | versions." "Oh this struct layout changed - these 20 other
           | places need to be updated". But the whole rest of my program
           | is left as it was.
           | 
           | I don't mind if the binary is larger and less efficient while
           | developing, so long as I can later switch to release mode and
           | build the program for .. well, releasing. With a properly
           | incremental compiler, we should be able to compile small
           | changes into our software more or less instantly. Even in
           | complex languages like Rust.
        
             | estebank wrote:
             | > But, we write compilers and linkers as batch programs
             | that redo all the compilation work from scratch every time.
             | 
              | I don't think that there are that many production-level
              | compilers that don't perform the kind of caching that
              | you're advocating for. Part of the problem is what the
              | language semantics are.
              | https://www.pingcap.com/blog/rust-huge-compilation-units/
              | gives an example of this.
             | 
             | > Surely it's possible to make a compiler that takes time
             | proportional to how much of my code has changed, not how
             | large the program is in total.
             | 
              | Language design also affects what can be done. For example,
              | Rust relies a lot on monomorphization, which in turn makes
              | it much harder (not necessarily impossible) to do in-place
              | patching, but in a language like Java or Swift, where a
              | lot of checks can be relegated to runtime, it becomes much
              | easier to do that kind of patching.
             | 
             | I think that there's a lot left to be done to get closer to
             | what you want, but changing a compiler that has users in
             | such an extensive way is a bit like changing the engine of
             | a plane while it's flying.
        
               | josephg wrote:
               | Yes, rust has an incremental compilation mode that is
               | definitely faster than compiling the whole program from
               | scratch. But linking is still done from scratch every
               | time, and that gets pretty slow with big programs.
               | 
                | I agree that it would be a lot of work to retrofit LLVM
                | like this. But personally I think that effort would be
                | well worth it. Maybe the place to start is the linker.
        
               | estebank wrote:
               | You're in good company: multiple people want to
               | experiment with taking ownership of the linking and
               | codegen steps to enable this kind of behavior. I would be
               | more than happy to see that happen. I feel that the
               | problem is a project of that magnitude requires a
               | benefactor for a small group of people to work completely
               | dedicated to it for maybe 2 years. Those don't come along
               | often. The alternative is that these projects don't
               | happen, lose steam or take a really long time.
        
             | samatman wrote:
              | Agreed. For example, Julia (which is a compiled language)
              | has a package called Revise, which provides incremental
              | compilation. A cold start on a package / project / script
              | will take a while, and even when dependencies are
              | precompiled, if the code you're working on is not, REPL
              | startup takes a noticeable amount of time.
              | 
              | But once you have your REPL prompt, it's just: edit code,
              | test it. Revise figures out what needs recompiling and does
              | it for you. There are some limitations, most notably that
              | any redefinition of a struct requires a reboot, but it's a
              | great experience.
             | 
              | A lot of the current work going into the Zig compiler is to
              | greatly increase the compile speed of debug builds, by
              | cutting the LLVM dependency, and then to add incremental
              | compilation. I'm looking forward to the fruits of that
              | labor; I don't like to wait.
        
             | rotifer wrote:
             | > Surely it's possible to make a compiler that takes time
             | proportional to how much of my code has changed [...]
             | 
             | My understanding is that this is how Eclipse's Java
             | compiler works, but I'm not positive.
        
         | levodelellis wrote:
          | I'm the author of https://bolinlang.com/ Go is a magnitude
          | slower. I have some ideas about why people think Go is fast,
          | but there's really no excuse for a compiler to be slower than
          | it. Even gcc is faster than Go if you don't include too many
          | headers. Try compiling sqlite if you don't believe me. Last
          | time I checked, you could compile code in <100ms when using
          | SDL3 headers (although not SDL2).
        
       | Sniffnoy wrote:
       | Wait, in Swift it's illegal to multiply an int by a double?? So
       | you would have to explicitly cast index to a double? I definitely
       | didn't expect that!
        
         | manmal wrote:
         | One could overload the * infix operator to support this type of
         | multiplication. It just doesn't come with the standard library.
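          | 
          | A sketch of what that overload might look like (deliberately
          | not in the standard library, since it reintroduces implicit
          | widening and grows the overload search space):
          | 
          |     func * (lhs: Int, rhs: Double) -> Double {
          |         Double(lhs) * rhs
          |     }
          | 
          |     let index = 3
          |     let offset = index * 0.5  // 1.5, no manual conversion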
        
         | jshier wrote:
         | No cast, just a conversion, depending on how you want the math
         | to work. int * double would usually be Double(int) * double.
        
         | wlesieutre wrote:
         | You could overload * if you really don't want to convert the
         | int manually
        
         | redwoolf wrote:
         | I think this is the correct way to handle this. I don't know
         | how many times I've been stymied by integer arithmetic and
         | precision loss by implicit conversion. How should this be
         | handled? Should the int be converted to a double before the
         | operation, should the double be converted to int before the
         | operation, or should the result be converted to an int or a
         | double? As someone who writes code in many languages in a day,
         | these implicit conversion rules can be difficult to remember.
          | It's best to force the developer to be explicit about the
          | intention.
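          | 
          | What the explicitness looks like in practice: the programmer
          | states which conversion they meant, so nothing is lost
          | silently.
          | 
          |     let count = 3
          |     let price = 1.5
          | 
          |     let total = Double(count) * price  // widen first: 4.5
          |     let whole = count * Int(price)     // truncate first: 3 * 1 = 3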
        
         | jb1991 wrote:
         | Implicit conversions in any language are often a source of hard
          | to find bugs. So in general, I find most people agree with the
          | Swift design, which does not allow such conversions implicitly.
        
         | jayyhu wrote:
         | The alternative is to do what JS does:
         | https://www.destroyallsoftware.com/talks/wat
        
       | pajuc wrote:
       | It's really hard for me to read past Lattner's quote. "Beautiful
       | minimal syntax" vs "really bad compile times" and "awful error
       | messages".
       | 
       | I know it's not helpful to judge in hindsight, lots of smart
       | people, etc.
       | 
       | But why on earth would you make this decision for a language
       | aimed at app developers? How is this not a design failure?
       | 
       | If I read this article correctly, it would have been an
       | unacceptable decision to make users write
       | setThreatLevel(ThreatLevel.midnight) in order to have great
       | compile times and error messages.
       | 
       | Can someone shed some light on this to make it appear less
       | stupid? Because I'm sure there must be something less stupid
       | going on.
        
         | refulgentis wrote:
         | Here's some light to make it appear less stupid:
         | 
          | He doesn't claim it's _not_ a design failure.
         | 
          | He doesn't say they sat down and said "You know what? Let's do
          | beautiful minimal syntax but have awful error messages & really
          | bad compile times."
         | 
          | The light here is recursive. As you lay out, it is _extremely_
          | unlikely that stupid choice was made actively.
         | 
         | Left with an unlikely scenario, we take a step back and
         | question if we have any assumptions: and our assumption is they
         | made the choice actively.
        
           | jb1991 wrote:
            | What in the world is this even saying?
        
             | jerbear4328 wrote:
             | The designers of the language didn't intend for it to end
             | up this way, it just worked out like it did. GP is pointing
             | out that their parent assumed it was intentionally choosing
             | pretty syntax over speed, when it was more likely for them
             | to start with the syntax without considering speed.
        
               | pajuc wrote:
                | What's the difference between "choosing pretty syntax
                | over speed" and "starting with the syntax without
                | considering speed"?
        
               | janc_ wrote:
               | Intentions.
               | 
               | The first is a conscious decision, the second is not.
        
           | dmurray wrote:
           | The alternatives are even less charitable to the Swift
           | creators.
           | 
           | Surely, early in the development someone noticed compile
           | times were very slow for certain simple but realistic
           | examples. (Alternatives: they didn't have users? They didn't
           | provide a way to get their feedback? They didn't measure
           | compile times?)
           | 
           | Then, surely they sat down considered whether they could
           | improve compile times and at what cost, and determined that
           | any improvement would come at the cost of requiring more
           | explicit type annotations. (Alternatives: they couldn't do
           | the analysis the author did? The author is wrong? They found
           | other improvements, but never implemented them?)
           | 
           | Then, surely they made a decision that the philosophy of this
           | project is to prioritize other aspects of the developer
           | experience ahead of compile times, and memorialized that
           | somewhere. (Alternatives: they made the opposite decision,
           | but didn't act on it? They made that decision, but didn't
           | record it and left it to each future developer to infer?)
           | 
           | The only path here that reflects well on the Swift team
           | decision makers is the happy path. I mean, say what you like
           | about the tenets of Swift, dude, at least it's an ethos.
        
             | refulgentis wrote:
             | Quick note, a lot of the broader things you mention are
             | exactly the case, ex. prioritizing backwards compatibility
             | and ABI stability at all costs was a big kerfuffle around
             | Swift 3/4 and publicly documented. ex. limited use
             | initially
             | 
             | Broad note: there's something off with the approach, in
             | general. ex. we're not trying to find the interpretation
             | that's most favorable to them, just a likely one. Ex. It
             | assumes perfect future knowledge to allow objectively
             | correct decisions on sequencing at any point in the project
              | lifecycle. Ex. entirely possible they had automated testing
              | on this, but it turns out the #s go deep red anytime anyone
              | adds operator overloading anyway in Apple-bundled
              | frameworks.
             | 
              | Simple note, as a burned-out ex-bigco: someone got wedded
              | to operator overloading, and it was an attractive CS
              | problem where "I can fix it... or at least, I can fix it in
              | enough cases" was a silent thought in a lot of ICs' heads.
             | 
             | That's a guess, but somewhat informed in that this was
             | "fixed"/"addressed" and a recognized issue several years
             | ago, and I watched two big drives at it with two different
             | Apple people taking lead on patching/commenting on it
             | publicly
        
             | tolmasky wrote:
              | _> Alternatives: they didn't have users?_
             | 
             | Correct, it is well known that they kept Swift a bizarre
             | secret internally. It seems no one thought it would be a
             | good idea to consult with the vast swathes of engineers
             | that had been using the language this was intended to
             | replace for the last 30 or so years, nor to consult with
             | the maintainers of the frameworks this language was
             | supposedly going to help write, etc. As you can imagine,
             | this led to many problems beyond just not getting a large
             | enough surface area of compiler performance use cases.
             | 
             | Of course, _after_ it was released, when they seemed very
             | willing to make backward-incompatible changes for 5 years,
             | and in theory they then had plenty of people running into
             | this, they apparently still decided to not prioritize it.
        
           | pajuc wrote:
           | So they didn't focus actively on good error messages and fast
           | compile times when designing a new language?
        
             | anoncareer0212 wrote:
             | If we _have_ to flatten it to  "they chose and knew exactly
             | what choice they were making", then there's no light to be
             | shed. Sure. That's stupid.
             | 
              | It's just as stupid to insist on that being the case.
             | 
              | If that's not convincing to you on its merits, consider
              | another aspect: you were expressly inviting conversation on
              | _why that wasn't the case_.
        
               | pajuc wrote:
               | Why is there no light to be shed?
               | 
               | This is a perfectly reasonable question to ask. And a
                | straight, simple answer might be that no, they didn't. Or
                | that they didn't initially, and later it was too late. Or:
                | here are the circumstances in leadership and historical
                | contexts that led to it, and we find those in other
                | projects as well.
               | 
               | That would be interesting to hear.
        
               | refulgentis wrote:
               | I've kind of lost the plot myself. XD The whole concept
               | seems a bit complicated to me.
               | 
               | You're holding out on responding constructively until
               | someone on the Swift team responds?
               | 
               | Better to just avoid boorishness, or find another way to
                | word your invitation to the people from whom you will
                | accept discussion.
               | 
               | I wouldn't go out of my way to engage in the comments
               | section with someone who calls me stupid repeatedly,
               | based on an imagined analysis of my thought process being
                | that of a small child, then refuses to engage with any
                | discussion that doesn't start with "yes, my team was
                | stupid, we did actively choose awful error messages and
                | really bad compile times for pretty syntax."
        
         | jwells89 wrote:
         | I can't offer much in the way of reasoning or explanation, but
         | having written plenty of both Swift and Kotlin (the latter of
          | which is a lot like a more verbose/explicit Swift in terms
         | of syntax), I have to say that in the day to day I prefer the
         | Swift way. Not that it's a terrible imposition to have to type
         | out full enum names and such, but it feels notably more clunky
         | and less pleasant to write.
         | 
         | So maybe the decision comes down to not wanting to trade off
         | that smooth, "natural" feel when writing it.
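          | 
          | (A small illustration of the shorthand in question, with a
          | hypothetical enum: Swift lets the expected type supply the
          | enum's name, where Kotlin-style code spells it out.)
          | 
          |     enum ThreatLevel { case low, high, midnight }
          | 
          |     func setThreatLevel(_ level: ThreatLevel) {}
          | 
          |     setThreatLevel(.midnight)            // inferred shorthand
          |     setThreatLevel(ThreatLevel.midnight) // fully spelled out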
        
         | pacaro wrote:
          | The author mentioned Zig, and Zig would get this right: you
          | can just write `setThreatLevel(.midnight)`.
         | 
          | But where Zig breaks down is on any more complicated
          | inference. It's common to end up needing code like
          | `@as(f32, 0)` because Zig just can't work it out.
         | 
          | In awkward cases you can have chains of several @ casts just
          | to keep the compiler in the loop about which type to use in a
          | statement.
          | 
          | I like Zig, but it has its own costs too.
        
         | ChrisMarshallNY wrote:
         | _> aimed at app developers_
         | 
         | I'm a native Swift app developer, for Apple platforms, so I
         | assume that I'm the target audience.
         | 
         | Apps aren't major-league toolsets. My projects tend to be
         | fairly big, for apps, but the compile time is pretty much
         | irrelevant, to me. The linking and deployment times seem to be
         | bigger than the compile times, especially in debug mode, which
         | is where I spend most of my time.
         | 
         | When it comes time to ship, I just do an optimized archive, and
         | get myself a cup of coffee. It doesn't happen that often, and
         | is not unbearable.
         | 
         | If I was writing a full-fat server or toolset, with hundreds of
         | files, and tens of thousands of lines of code, I might have a
         | different outlook, but I really appreciate the language, so
         | it's worth it, for me.
         | 
          | Of course, I'm one of those old-timers who used to have to
          | start the machine by clocking in the bootloader, so there's
         | that...
        
           | josephg wrote:
           | I tried to fix a bug in Signal a few years ago. One part of
           | the code took so long to do type inference on my poor old
            | Intel MacBook that the Swift compiler errored out. Waiting
            | longer apparently wasn't an option; I needed a faster
            | computer just to be able to compile the program.
           | 
            | That was pretty horrifying. I've never seen a compiler that
            | errors nondeterministically based on how fast your CPU is.
           | Whatever design choices in the compiler team led to that
           | moment were terrible.
        
             | ChrisMarshallNY wrote:
              | That sounds like websites designed by designers with
              | massive monitors.
             | 
             | The tools team probably had ultra-fast Macs, and never
             | encountered that.
             | 
             | It definitely sounds like a bug in the toolset. I hope that
             | it was reported.
        
         | tolmasky wrote:
         | Honestly it follows the design of the rest of the language. An
         | incomplete list:
         | 
         | 1. They wrote it to replace C++ instead of Objective-C. This is
          | obvious from hearing Lattner speak; he always compares it to
          | C++. Which makes sense: he dealt with C++ every day, since he
         | is a compiler writer. This language does not actually address
         | the problems of Objective-C from a user-perspective. They
         | designed it to address the problems of C++ from a user-
         | perspective, and the problems of Objective-C from a _compiler
         | 's perspective_. The "Objective-C problems" they fixed were
         | things that made Objective-C annoying to optimize, not annoying
         | to write (except if you are a big hater of square brackets I
         | suppose).
         | 
         | 2. They designed the language in complete isolation, to the
         | point that most people at Apple heard of its existence the same
         | day as the rest of us. They gave Swift the iPad treatment.
         | Instead of leaning on the largest collection of Objective-C
         | experts and dogfooding this for things like ergonomics, they
         | just announced one day publicly that this was Apple's new
         | language. Then proceeded to make backwards-incompatible changes
         | for 5 years.
         | 
         | 3. They took the opposite approach of Objective-C, designing a
         | language around "abstract principles" vs. practical app
         | decisions. This meant that the second they actually started
         | working on a UI framework for Swift (the theoretical _point_ of
          | an Objective-C successor), _5 years after Swift was announced_,
          | they immediately had to add huge language features (view
         | builders), since the language was not actually designed for
         | this use case.
         | 
         | 4. They ignored the existing community's culture (dynamic
         | dispatch, focus on frameworks vs. language features, etc.) and
         | just said "we are a type obsessed community now". You could
         | tell a year in that the conversation had shifted from how to
         | make interesting animations to how to make JSON parsers type-
         | check correctly. In the process they created a situation where
         | they spent years working on silly things like renaming all the
         | Foundation framework methods to be more "Swifty" instead of...
         | 
         | 5. Actually addressing the clearly lacking parts of Objective-C
         | with simple iterative improvements which could have
         | dramatically simplified and improved AppKit and UIKit. 9 years
         | ago I was wishing they'd just add async/await to ObjC so that
         | we could get modern async versions of animation functions in
         | AppKit and UIKit instead of the incredibly error-prone chained
         | didFinish:completionHandler: versions of animation methods.
         | Instead, this was delayed until 2021 while we futzed about with
         | half a dozen other academic concerns. The _vast_ majority of
         | bugs I find in apps from a user perspective are from improper
          | reasoning about async/await, not null dereferences. Instead
         | the entire ecosystem was changed to prevent nil from existing
         | and under the false promise of some sort of incredible
         | performance enhancement, despite the fact that all the
         | frameworks were still written in ObjC, so even if your entire
         | app was written in Swift it wouldn't really make that much of a
         | difference in your performance.
         | 
         | 6. They were initially obsessed with "taking over the world"
         | instead of being a great replacement for the actual language
         | they were replacing. You can see this from the early marketing
         | and interviews. They literally billed it as "everything from
         | scripting to systems programming," which generally speaking
         | should always be a red flag, but makes a lot of sense given
         | that the authors did not have a lot of experience with anything
         | other than systems programming and thus figured "everything
         | else" was probably simple. This is not an assumption, he even
         | mentions in his ATP interview that he believes that once they
         | added string interpolation they'd probably convert the "script
         | writers".
         | 
         | The list goes on and on. The reality is that this was a failure
          | in management, not language design, though. The restraint should
         | have come from above, a clear mission statement of what the
         | point of this huge time-sink of a transition was _for_. Instead
         | there was some vague general notion that  "our ecosystem is
         | old", and then zero responsibility or care was taken under the
         | understanding that you are more or less going to _force people
          | to switch_. This isn't some open source group releasing a new
         | language and it competing fairly in the market (like, say, Rust
         | for example). No, this was the platform vendor declaring this
         | is the future, which IMO raises the bar on the care that should
         | be taken.
         | 
         | I suppose the ironic thing is that the vast majority of apps
          | are just written in UnityScript or C++ or whatever, since most
          | of the App Store is actually games and not utility apps written in
         | the official platform language/frameworks, so perhaps at the
         | end of the day ObjC vs. Swift doesn't even matter.
        
           | samatman wrote:
           | This is a great comment, you clearly know what you're talking
           | about and I learned a lot.
           | 
           | I wanted to push back on this a bit:
           | 
           | > _The "Objective-C problems" they fixed were things that
           | made Objective-C annoying to optimize, not annoying to write
           | (except if you are a big hater of square brackets I
           | suppose)._
           | 
           | From an outsider's perspective, this was the point of Swift:
           | Objective C was and is hard to optimize. Optimal code means
           | programs which do more and drain your battery less. That was
            | Swift's pitch: the old Apple inherited Objective C from NeXT,
           | and built the Mac around it, back when a Mac was plugged into
           | the wall and burning 500 watts to browse the Internet. The
           | new Apple's priority was a language which wasn't such a hog,
           | for computers that fit in your pocket.
           | 
           | Do you think it would have been possible to keep the good
           | dynamic Smalltalk parts of Objective C, and also make a
           | language which is more efficient? For that matter, do you
           | think that Swift even succeeded in being that more efficient
           | language?
        
       | tantalor wrote:
       | > they're invalid swift
       | 
        | If this isn't valid why are we even talking about it? The
        | compiler should report a syntax error or something.
        
         | redwoolf wrote:
         | That's the point. It's invalid swift syntax, but it takes >40
         | seconds to tell you that.
        
         | Jtsummers wrote:
         | The point is that it's valid syntax (invalid _syntax_ is found
         | in an earlier phase of compilation and would report much
          | faster). It's invalid in Swift's type system, and it takes it
         | 42 seconds (in the string example) and 8 seconds (in the math
         | expression one) for it to tell you that it can't type-check it
         | in a reasonable time and then it quits.
        
           | tantalor wrote:
           | "Reasonable time" is subjective. What happens if you don't
           | limit the time?
           | 
           | Guesses:
           | 
           | 1. Successfully compiles
           | 
           | 2. Reports an error
           | 
           | 3. Never halts
           | 
           | 4. Nobody knows
        
         | Dylan16807 wrote:
         | If there was just the right overload hiding somewhere, it
         | wouldn't be an error and it would take a similar amount of
         | time. This is just the easiest way to show off the problem,
         | which is how long it takes to check.
        
       | vlovich123 wrote:
        | The combinatorial explosion is intractable, but since it only
       | seems to come up in really obscure corner cases, I wonder if the
       | typical inference scenarios can be solved by having the compiler
       | cache the AST across invocations so that inference only needs to
       | be performed on invalidated parts of the AST as it's being typed
       | instead of waiting for the user to invoke the compiler.
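        | 
        | (A rough sketch of the caching idea, in Swift, with every name
        | hypothetical: key previously solved types on a hash of each
        | expression subtree, so only edited subtrees are re-solved.)
        | 
        |     // Hypothetical incremental-inference cache: map a hash of an
        |     // expression's subtree to its previously solved type so that
        |     // unchanged subtrees are not re-solved on the next invocation.
        |     struct InferenceCache {
        |         private var solved: [Int: String] = [:]  // subtree hash -> type name
        | 
        |         mutating func solvedType(forSubtreeHash hash: Int,
        |                                  solve: () -> String) -> String {
        |             if let cached = solved[hash] { return cached }
        |             let result = solve()  // fall back to the full solver
        |             solved[hash] = result
        |             return result
        |         }
        |     }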
        
         | jerf wrote:
         | I think it would also be interesting for the compiler to modify
         | the source code (with a flag, presumably, or possibly through
         | the local LSP server) with the solved result as needed. This
          | would also feed back to the programmer what the problem is, what
         | the compiler thought the solution is (and under the
         | circumstances, while it can't be "wrong" there's a decent
         | chance it is suboptimal or not what the programmer expected),
         | and not require future caching.
         | 
         | I kind of feel like that for more advanced languages this sort
         | of back-and-forth between the type checker and the language has
         | some potential to it.
        
           | irq-1 wrote:
           | Why not make it a warning if the compiler leaves the happy
           | path? The fix is easy for the programmer (enums, casting,
            | etc.) and could be left unfixed if the programmer knows it's
            | OK (meaning it won't take 42 seconds and give an error). The
           | LSP would have to know when the compiler ran into this, but
           | if the compiler can identify the type explosion problem, it
           | can tell the LSP.
        
         | JackYoustra wrote:
         | I don't think it comes up in obscure corner cases now with
         | SwiftUI's builder types being concrete, especially with
          | overloaded callbacks like in ForEach.
         | 
         | Edit: I just remembered my favorite one: I had a view where the
         | compile times doubled with every alert I added to it.
        
         | fweimer wrote:
         | I think the challenge here is that it only happens for a type
         | error, not a successful compilation. Each time this happens,
         | the programmer would try to fix the type error, usually
         | invalidating the cache.
         | 
         | I'm not so sure the problem is intractable because it's so
         | well-structured. Someone would have to look at it and check
         | that there aren't any low-hanging fruits. The challenge might
         | be that anyone who could fix this could make much more
         | impactful contributions to the compiler. But it's hard to know
         | without trying.
        
       | tinganho wrote:
       | Isn't the channel variable declared and inferred as an int32?
       | Can't see why the overload isn't resolved directly?
        
         | extrabajs wrote:
         | The problem isn't that the type inference can't figure out that
         | it's a number (it can). Subtyping makes inference difficult.
         | There may be a function somewhere which takes arguments that
         | could be made to accept a string and an int32 (or whatever
         | other number type that literal could be).
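          | 
          | (A minimal sketch of that point: an integer literal in Swift
          | has no fixed type; any ExpressibleByIntegerLiteral conformance
          | can claim it, so `channel` alone doesn't pin down the
          | overload.)
          | 
          |     let a = 11          // defaults to Int...
          |     let b: Int8 = 11    // ...but the same literal can be Int8,
          |     let c: Double = 11  // Double, or any other conforming type.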
        
       | withoutboats3 wrote:
       | I'm suspicious about the truth of this claim. I don't think
        | bidirectional typechecking is the problem in itself: the problem
        | is trying to do type inference when you have some extremely
        | flexible operator overloading, subtyping, and literal syntax
        | features.
       | It's these "expressive" features which have made type inference
       | in Swift far more computationally complex, not type inference
       | itself.
       | 
        | And certainly not _bidirectional type inference_; the author of
       | this post's definition of this concept isn't even right
       | (bidirectional typing refers to having a distinction between
       | typing judgements which are used to infer types and those which
       | are used to restrict types, not moving bidirectionally between
       | parent nodes & child nodes). I don't know if the mistake comes
       | from the post author or Chris Lattner, and I don't know if the
       | word "bidirectional" is relevant to Swift's typing; I don't know
        | if Swift has a formal description of its type system, or whether
        | that formal description is bidirectional or not.
       | 
       | EDIT: watching the video the Chris Lattner quote comes from, it
       | appears the mistake about the word "bidirectional" is his.
       | Bidirectional type systems are an improvement over ordinary type
       | systems in exactly the direction he desires: they distinguish
       | which direction typing judgements can be used (to infer or check
       | types), whereas normal formal descriptions of type systems don't
       | make this distinction, causing the problems he describes. "Bottom
       | up" type checking is just a specific pattern of bidirectional
       | type checking.
       | 
       | Regardless, the problem with Swift is that a literal can have any
        | of an unbounded set of types which implement a certain protocol,
       | all of which have supertypes and subtypes, and the set of
       | possibilities grows combinatorially because of these language
       | features.
       | 
       | cf: https://arxiv.org/abs/1908.05839
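        | 
        | (To make that concrete, a minimal Swift sketch; the `Wrapped`
        | type is hypothetical. Because a bare literal has no type of its
        | own, the checker must consider every conforming type for each
        | literal in an expression, and the possibilities multiply.)
        | 
        |     // A hypothetical type that, like String, can be built
        |     // directly from a string literal.
        |     struct Wrapped: ExpressibleByStringLiteral {
        |         let value: String
        |         init(stringLiteral value: String) { self.value = value }
        |     }
        | 
        |     // Both lines are valid: "127.0.0.1" is not inherently a
        |     // String; it is whatever ExpressibleByStringLiteral type
        |     // the context demands (String, StaticString, Wrapped, ...).
        |     let a: String  = "127.0.0.1"
        |     let b: Wrapped = "127.0.0.1"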
        
         | lamontcg wrote:
         | I'd trade type inference for expressibility (function and
          | operator overloading in particular). But this currently seems
          | to be a heretical opinion.
        
           | Tuna-Fish wrote:
            | Rust took the other road:
            | 
            |     error[E0277]: cannot add `i64` to `i32`
            |         1i32 + 2i64;
            |              ^ no implementation for `i32 + i64`
           | 
           | It bothered me at first, there are a lot of explicit
           | annotations for conversions when dealing with mixed precision
           | stuff. But I now feel that it was the exactly correct choice,
           | and not just because it makes inference easier.
        
             | lamontcg wrote:
              | Rust still lets you create your own data types and
              | implement traits like Add, Mul, Div and Sub on them, e.g.
              | for your own Vector3 type (although I still feel this is
              | too restrictive, but I understand why Rust chooses not to
              | allow more). What Rust blocks in your example
             | there is implicit type conversion/coercion/promotion, and
             | I'm actually okay with that and forcing being much more
             | explicit about type casting.
             | 
              | Go is the language that requires your vector types to use
              | syntax like v1.VecMult(v2) and v1.ScalarMult(s)
             | because there's no operator overloading at all (even though
             | there's a largely useless baked-in complex number class).
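              | 
              | (For comparison, the Swift analogue of implementing Rust's
              | Add trait is a user-defined operator overload; a minimal
              | sketch with a hypothetical Vector3 type:)
              | 
              |     // Hypothetical Vector3 with a user-defined + overload,
              |     // the Swift analogue of implementing Add for a type.
              |     struct Vector3 {
              |         var x, y, z: Double
              |         static func + (lhs: Vector3, rhs: Vector3) -> Vector3 {
              |             Vector3(x: lhs.x + rhs.x, y: lhs.y + rhs.y,
              |                     z: lhs.z + rhs.z)
              |         }
              |     }
              | 
              |     let v = Vector3(x: 1, y: 0, z: 0) + Vector3(x: 0, y: 1, z: 0)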
        
               | adamwk wrote:
                | The Rust traits are not overloading though (at least not
                | in the same way as Swift); that's just implementing a
                | trait on a type. I think the distinction is that with
                | traits or protocols you get a lot of expressivity,
                | reducing the need for overloading.
        
         | pcwalton wrote:
          | Yeah, I agree. Regular H-M typechecking is actually really
          | fast if you don't have subtyping. The problem is that
          | subtyping makes it really complicated--in fact, IIRC, formally
          | undecidable.
        
       | putzdown wrote:
       | It looks to me as if there's a solution to this problem based on
       | the precompilation of sparse matrices. I'll explain. If you have
       | a function (or operator) call of the form fn(a, b), and you know
       | that fn might accept 19 types (say) in the "a" place and 57 types
       | in the "b" place, then in effect you have a large 2d matrix of
       | the a types and the b types. (For functions taking a larger
       | number of arguments you have a matrix with larger
       | dimensionality.) The compiler's problem is to find the matrix
       | cell (indeed the first cell by some ordering) that is non-empty.
       | If all the cells are empty, then you have a compiler error. If at
       | least one cell is non-empty (the function is implemented for this
       | type combination), then you ask "downward" whether the given
        | argument values can conform to the acceptable types. I know that
       | there's complexity in this "downward" search, but I'm guessing
       | that the bulk of the time is spent on searching this large
       | matrix. If so, then it's worth noting that there are good ways of
       | making this kind of sparse matrix search very fast, almost
       | constant time.
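        | 
        | (A minimal Swift sketch of that lookup, all names hypothetical:
        | store only the non-empty cells of the overload "matrix" in a
        | hash set, so "is any cell non-empty for (a, b)?" becomes a
        | constant-time probe rather than a combinatorial search.)
        | 
        |     // Hypothetical sparse overload table for a binary function:
        |     // only implemented (lhs, rhs) type pairs are stored.
        |     struct TypePair: Hashable {
        |         let lhs: String  // stand-ins for real type identifiers
        |         let rhs: String
        |     }
        | 
        |     let implementedOverloads: Set<TypePair> = [
        |         TypePair(lhs: "String", rhs: "String"),
        |         TypePair(lhs: "Int",    rhs: "Int"),
        |     ]
        | 
        |     // O(1) membership test instead of scanning 19 x 57 cells.
        |     func hasOverload(_ a: String, _ b: String) -> Bool {
        |         implementedOverloads.contains(TypePair(lhs: a, rhs: b))
        |     }
        | 
        |     print(hasOverload("String", "Int"))  // false -> error immediately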
        
       | ebri wrote:
       | All those Swiftie haters
        
       | erichocean wrote:
       | There are fast type inference algorithms available today, such as
       | MLStruct. [0]
       | 
       | [0] https://github.com/hkust-taco/mlstruct
        
       | ashdnazg wrote:
       | Our CI posts a list of shame with the 10 worst offending
       | expressions on every PR as part of the build and test results.
       | 
       | So far it's working quite nicely. Every now and then you take a
       | look and notice that your modules are now at the top, so you
       | quickly fix them, passing the honour to the next victim.
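        | 
        | (One plausible way to gather such a list, assuming the
        | long-standing Swift frontend flags below are available in your
        | toolchain: make the compiler warn on every expression that
        | type-checks slower than a threshold, then have CI collect and
        | rank the warnings. A hypothetical Package.swift excerpt:)
        | 
        |     // Warn when any expression takes > 100 ms to type-check, so
        |     // CI can grep the build log and rank the worst offenders.
        |     .target(
        |         name: "App",  // hypothetical target name
        |         swiftSettings: [
        |             .unsafeFlags([
        |                 "-Xfrontend", "-warn-long-expression-type-checking=100",
        |                 "-Xfrontend", "-warn-long-function-bodies=100",
        |             ])
        |         ]
        |     )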
        
       | mrkeen wrote:
       | HM works great for me. Let's try it elsewhere instead of blaming
        | the algorithm!
        | 
        |     {-# LANGUAGE OverloadedStrings #-}          -- Let strings turn into any type defining IsString
        |     {-# LANGUAGE GeneralizedNewtypeDeriving #-} -- simplify/automate defining IsString
        | 
        |     import Data.String (IsString)
        | 
        |     main = do
        |       -- Each of these expressions might be a String or one of the 30 Foo types below
        |       let address  = "127.0.0.1"
        |       let username = "steve"
        |       let password = "1234"
        |       let channel  = "11"
        | 
        |       let url = "http://" <> username
        |                           <> ":"      <> password
        |                           <> "@"      <> address
        |                           <> "/api/"  <> channel
        |                           <> "/picture"
        |       print url
        | 
        |     newtype Foo01 = Foo01 String deriving (IsString, Show, Semigroup)
        |     newtype Foo02 = Foo02 String deriving (IsString, Show, Semigroup)
        |     -- ... eliding 27 other type definitions for the comment
        |     newtype Foo30 = Foo30 String deriving (IsString, Show, Semigroup)
       | 
       | Do we think I've captured the combinatorics well enough?
       | 
       | The _url_ expression is 9 adjoining expressions, where each
       | expression (and pair of expressions, and triplet of expressions
       | ...) could be 1 of at least 31 types.
       | 
        |     $ ghc --version
        |     The Glorious Glasgow Haskell Compilation System, version 9.0.2
        | 
        |     $ time ghc -fforce-recomp foo.hs
        |     [1 of 1] Compiling Main             ( foo.hs, foo.o )
        |     Linking foo ...
        | 
        |     real    0m0.544s
        |     user    0m0.418s
        |     sys     0m0.118s
       | 
       | Feels more sluggish than usual, but bad combinatorics shouldn't
       | just make it _slightly_ slower.
       | 
       | I tried compiling the simplest possible program and that took
       | `real 0m0.332s` so who knows what's going on with my setup...
        
         | ash_gti wrote:
          | Your example is different from the example in the post.
          | 
          | Specifically, in the post `channel = 11` is an integer.
         | 
         | If it was a string then it parses very quickly.
        
           | mrkeen wrote:
            | If what you're saying is true (the type is fixed as an
            | integer), then it's even easier in TFA's case. No inference
           | necessary.
           | 
            | In my code _channel_ is not a string, it's _one type_ of the
           | 31-set of (String, Foo01, Foo02, .., Foo30). So it needs to
           | be inferred via HM.
           | 
           | > If it was a string then it parses very quickly.
           | 
           | "Parses"? I don't think that's the issue. Did you try it?
           | 
           | ----- EDIT ------
           | 
            | I made it an Int:
            | 
            |     let channel = 11 :: Int
            | 
            |     instance IsString Int where
            |       fromString = undefined
            |     instance Semigroup Int where
            |       (<>) = undefined
            | 
            |     real    0m0.543s
            |     user    0m0.396s
            |     sys     0m0.148s
        
             | ash_gti wrote:
              | The type is an inferred integer literal in Swift (in the
              | Swift standard library this is `ExpressibleByIntegerLiteral`;
              | the string literals are `ExpressibleByStringLiteral`
              | types).
             | 
             | The reason this causes issues with the type checker is it
             | has to consider all the possible combinations of the `+`
             | operator against all the possible types that can be
             | represented by an inferred integer literal.
             | 
              | This is what's causing the type checker to try every
             | possible combination of types implementing the `+`
             | operator, types implementing `ExpressibleByIntegerLiteral`
             | and `ExpressibleByStringLiteral` in the standard library.
             | That combination produces 59k+ permutations without even
             | looking at non-standard library types.
             | 
             | If any of the types in the expression had an explicit type
              | then it would be type checked basically instantly. It's the
             | fact that none of the values in the expression have
             | explicit types that is causing the type checker to consider
             | so many different combinations.
        
               | mrkeen wrote:
               | > The reason this causes issues with the type checker is
               | it has to consider all the possible combinations of the
               | `+` operator against all the possible types that can be
               | represented by an inferred integer literal.
               | 
               | Can you please go back and read what I wrote and come up
               | with _any plausible alternative explanation_ for why I
               | wrote the code that I wrote, if _not_ to overload the HM
               | with too many possible types to infer.
               | 
               | > If any of the types in the expression had an explicit
               | type then it would be type checked basically instantly.
               | 
               | Did you try this?
               | 
               | > Its the fact that none of the values in the expression
               | have explicit types that is causing the type checker to
               | consider so many different combinations.
               | 
               | That's what I wrote in my first version. No explicit
               | types. Then I got some comment about it needing to be an
               | Int.
               | 
               | > That combination produces 59k+ permutations without
               | even looking at non-standard library types.
               | 
                | Mine should reject 26,439,622,160,640 invalid typings to
                | land on one of 31 possible well-typed readings of this
                | program: (31 ^ 9) - 31.
        
               | ash_gti wrote:
                | The Haskell source has `let channel = "11"` vs `let
                | channel = 11`. The example from the post looks like it
                | should be pretty straightforward, but the Swift compiler
                | falls over when you try it.
               | 
                | Trying it locally, for example:
                | 
                |     # Original example
                |     $ time swiftc -typecheck - <<-HERE
                |     let address = "127.0.0.1"
                |     let username = "steve"
                |     let password = "1234"
                |     let channel = 11
                | 
                |     let url = "http://" + username
                |             + ":" + password
                |             + "@" + address
                |             + "/api/" + channel
                |             + "/picture"
                | 
                |     print(url)
                |     HERE
                |     <stdin>:6:5: error: the compiler is unable to
                |     type-check this expression in reasonable time; try
                |     breaking up the expression into distinct
                |     sub-expressions
                |      4 | let channel = 11
                |      5 |
                |      6 | let url = "http://" + username
                |        |     `- error: the compiler is unable to
                |        |        type-check this expression in reasonable
                |        |        time; try breaking up the expression
                |        |        into distinct sub-expressions
                |      7 |             + ":" + password
                |      8 |             + "@" + address
                |     swiftc -typecheck - <<<''  36.38s user 1.40s system
                |     96% cpu 39.154 total
                | 
                |     # Adding a type to one of the expression values
                |     $ time swiftc -typecheck - <<-HERE
                |     let address = "127.0.0.1"
                |     let username = "steve"
                |     let password = "1234"
                |     let channel = 11
                | 
                |     let url = "http://" + username
                |             + ":" + password
                |             + "@" + address
                |             + "/api/" + String(channel)
                |             + "/picture"
                | 
                |     print(url)
                |     HERE
                |     swiftc -typecheck - <<<''  0.11s user 0.03s system
                |     74% cpu 0.192 total
               | 
                | Which is roughly in line with the numbers in the
                | original post.
        
       | pshirshov wrote:
        | Scala does essentially the same thing a lot faster, so it's not
        | a fundamental limitation.
        
       | w10-1 wrote:
       | This article doesn't even mention the new type checker and
       | constraint solver.
       | 
       | The compiler is open-source, and discussed on open forums.
       | Readers would love some summary/investigation into slow-down
       | causes and prospects for fixes.
        
       | dmurray wrote:
       | > The issue is caused by using the + operator with the channel
       | Int and a String literal. Thanks to the standard library's 17
       | overloads of + and 9 types adopting the
       | ExpressibleByStringLiteral Protocol, the swift compiler can't
       | rule out that there might be a combination of types and operators
       | that make the expression valid, so it has to try them all. Just
       | considering that the five string literals could be one of the
       | possible nine types results in 59,049 combinations, but I suspect
       | that's a lower bound, since it doesn't consider the many
       | overloads of +.
       | 
        | This really seems like a design flaw. If there are 59,049
        | candidate combinations for string concatenation, surely either
       | 
       | - one of them should be expressive enough to allow concatenation
       | with an integer, which we can do after all in some other
       | languages
       | 
        | - or, the type system should have some way to prove that no
        | type reachable by concatenating subtypes of String can ever get
        | concatenated to an integer.
       | 
       | Is this unreasonable? Probably there's some theorem about why I'm
       | wrong.
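        | 
        | (For what it's worth, the first option is expressible in user
        | code today; a minimal sketch, not an endorsement, since adding
        | overloads like this grows exactly the search space under
        | discussion:)
        | 
        |     // A user-defined overload making "a" + 1 legal, as some
        |     // other languages allow out of the box.
        |     func + (lhs: String, rhs: Int) -> String {
        |         lhs + String(rhs)
        |     }
        | 
        |     let channel = 11
        |     print("/api/" + channel + "/picture")  // "/api/11/picture"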
        
       ___________________________________________________________________
       (page generated 2024-06-12 23:02 UTC)