[HN Gopher] Which programming language or compiler is faster
       ___________________________________________________________________
        
       Which programming language or compiler is faster
        
       Author : wglb
       Score  : 57 points
       Date   : 2021-12-18 02:15 UTC (20 hours ago)
        
 (HTM) web link (programming-language-benchmarks.vercel.app)
 (TXT) w3m dump (programming-language-benchmarks.vercel.app)
        
       | silvanocerza wrote:
        | The fastest programming language or compiler is the one you
        | use. Always.
        
         | nobleach wrote:
          | ...I wish this were true. I'm back writing
          | JavaScript/TypeScript. The new build tools are great. Perhaps
          | in the new year I'll convert some of these repos to build with
          | esbuild or swc... or vite.
        
         | nouveaux wrote:
          | No matter how good you are at Python, if you need to build an
          | SPA, it'll be far faster to go with JavaScript. This is true
          | even if you have to learn JavaScript from scratch. If you're
          | an expert at Pascal, good luck trying to use it on the
          | Arduino.
         | 
          | The hardest part of learning a second or third programming
          | language is overcoming your own frustration. Yes, there are
          | some pretty gnarly concepts in languages like Haskell.
          | However, most mainstream programming languages are very
          | similar to each other.
        
           | benibela wrote:
           | >If you're an expert at Pascal, good luck trying to use it on
           | the Arduino.
           | 
           | It is supposed to work with FreePascal:
           | https://wiki.freepascal.org/AVR_Programming
        
         | keithnz wrote:
          | This is almost never the criterion people use, as it is often
          | not that important. Productivity is often a bigger factor.
          | Security and robustness are other factors, as are libraries,
          | portability, platform limitations, and the domain problem to
          | be solved. Most software these days isn't really constrained
          | by speed; if the software ran 10x faster, you'd only get
          | marginal gains.
        
       | Narishma wrote:
        | Why link to that particular problem and not one of the dozen
        | others on that page?
        
       | inshadows wrote:
       | There used to be
       | 
       | http://shootout.alioth.debian.org
       | 
        | and it crashed and the data was lost, AFAIR.
       | 
       | There's now
       | 
       | https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
       | 
       | https://en.wikipedia.org/wiki/The_Computer_Language_Benchmar...
       | 
       | Just wanted to mention prior art.
        
       | superjan wrote:
        | This is a really strange benchmark to use. Compiler speed is
        | not an issue for these 100ish-line standalone benchmarks. The
        | question is how fast it is for a million-line project or worse.
        | I doubt C or C++ do well there, with their header-file includes
        | for dependencies.
        
         | subb wrote:
          | On a single-core processor, obviously not, but massive C++
          | codebases are compiled with multicore CPUs, and often
          | distributed. The advantage of the header-inclusion model is
          | that you get a bunch of independent compilation units, so
          | it's straightforward to parallelize.
         | 
          | A common way to speed things up even more is to merge some
          | .cpp files together, so each compiler thread stays active
          | longer and common headers are compiled only once.
         | 
          | With the new modules feature of C++20, compilation is
          | transformed into a DAG, and it is still unclear whether it's
          | really faster than just throwing a lot of cores at the
          | problem.
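[Editor's note: the .cpp-merging trick described above is often called a "unity build". A minimal sketch, with hypothetical file names, looks like this:]

```cpp
// unity.cpp -- hypothetical "unity build" translation unit.
// Instead of compiling renderer.cpp, physics.cpp and audio.cpp
// separately, one translation unit includes them all, so their
// common headers are parsed once and a single compiler process
// stays busy longer.
#include "renderer.cpp"
#include "physics.cpp"
#include "audio.cpp"

// Build only this file instead of the three:
//   g++ -c unity.cpp
```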
        
           | unnouinceput wrote:
            | Show me a C++ IDE that uses those multicore CPUs for
            | compilation and is faster than Delphi, for the same number
            | of lines and/or external dependencies. Btw, Delphi's IDE is
            | still 32-bit and single-core, and it still blows everything
            | else out of the water in terms of compile time for millions
            | of lines of code.
        
             | jcelerier wrote:
              | I wondered if Delphi supported LTO (as that's one of the
              | main compile-time-eaters), and the only examples I could
              | find of sentences containing both "Delphi" and "LTO"
              | referred to Delphi support for TAPE DRIVES
              | (https://wikipedia.org/wiki/Linear_Tape-Open). Of course
              | prehistoric tech is faster; running a 1995 game on an i7
              | would also be fast. What matters is: is Delphi able to
              | match C++ in run-time benchmarks?
              | 
              | Also, here's a video of my edit-compile-run cycle; cloc
              | on that project gives me 506963 loc without counting
              | dependencies. The software uses boost and fairly modern
              | C++ with no particular feature restriction and templates
              | going wild. It's not instant, but still, 2 seconds is
              | somewhat tolerable (and definitely faster than various
              | web stuff I've been involved with).
              | 
              | https://streamable.com/az397y
        
         | keithnz wrote:
          | I think the title might be misleading you. When it mentions
          | compiler speed, it means that for the same language there are
          | different compilers. It's about the runtime speed of
          | different languages and of different compilers for those
          | languages.
        
         | celeritascelery wrote:
         | I am sure if someone contributed a million line project for
         | every language they would be happy to benchmark it.
        
         | piinbinary wrote:
         | I believe this is measuring how fast the binary produced by
         | different compilers is, rather than how long the compilation
         | takes.
        
       | sigjuice wrote:
       | What does it even mean for a language to be faster?
        
         | fhd2 wrote:
         | Not sure how technically accurate it is, but "language" usually
         | implies more than just the syntax. I hear and say things like
         | "compiled language", "interpreted language", "language VM",
         | "garbage collected language", "dynamically typed language" and
         | so on. Those things all say something about development speed
         | and runtime speed.
        
           | sigjuice wrote:
           | The phrases "compiled language" and "interpreted language",
           | despite their widespread use, are not particularly sound
           | concepts. There are plenty of languages that have both
           | compilers and interpreters.
        
         | coldtea wrote:
          | A lot. There are semantics that can be made fast with little
          | effort (e.g. C vs Python).
          | 
          | (Some languages with bad semantics for that might still get
          | fast, but the language itself isn't designed to help with
          | that. JS, for example, was made fast, but with tons of effort
          | and tons of money and man-centuries thrown in by Google,
          | Apple, Mozilla, and MS; not the kind of resources available
          | to most languages. So I'd call a language whose semantics
          | afford optimization a fast language.)
         | 
         | Runtime complexity (e.g. "zero overhead" vs "lotsa overhead"
         | features).
         | 
          | Also, I find the "language vs compiler" dichotomy pedantic
          | and tedious. For most languages and most use cases, there's a
          | single dominant canonical implementation, e.g. CPython,
          | Matz's Ruby, Oracle's Java.
         | 
          | Just because there are other niche implementations that all
          | of 10 people use doesn't change the fact that 90% of users
          | will be on the canonical implementation, and for them, THAT's
          | synonymous with the language, not just some abstract spec.
        
         | mottosso wrote:
          | That the same logical steps expressed in one language result
          | in less computation time spent, when compiled or interpreted,
          | than when expressed in another language. Seems a reasonable
          | question to ask, given there's more to a language than how
          | you express things: toolchain, library availability and
          | community.
        
         | User23 wrote:
         | Strictly speaking it's meaningless, since a language is a set
         | of syntax and semantics. Pragmatically though, when people talk
         | about the speed of a language they actually mean the typical
         | performance of processes defined by programs written in that
         | language, which of course depends on the code generator and
         | runtime, among other things.
         | 
         | There's nothing stopping anyone from writing a fully standards
         | compliant C interpreter that will have considerably slower
         | process execution after all. In fact, a quick search shows that
         | it's been done many times, for example this[1].
         | 
         | [1] https://github.com/jpoirier/picoc
        
         | retrac wrote:
         | Same idea as comparing different compilers for the same
         | language. Some algorithms can be expressed in some languages in
         | an essentially identical way. For example, various sorting
         | algorithms look pretty much the same in Fortran, Pascal and C.
         | Only surface syntax differs. Assuming this is done fairly with
         | each implementation being properly equivalent, it's a way to
         | compare the speed of the languages, at least for that
         | algorithm.
        
       | nurettin wrote:
        | I wonder where FPC stands. Original Pascal implementations
        | were single-pass, and I've had Mloc Delphi projects build in
        | seconds.
        
         | cmroanirgo wrote:
          | I came here to ask the same. Pascal files are laid out in
          | such a manner that the "implementation" section of the file
          | is ready for compilation to bytes, and any forward references
          | just mean that its code ends up later in the exe. Typical
          | runtime speed has only ever been fractionally slower than
          | C/C++.
        
       | peanut_worm wrote:
       | Julia always seems to rank #1 or close to it when compared to
       | other interpreted languages but I only ever see it used for
       | research and math. Why doesn't anyone use it for anything else?
        
         | harles wrote:
         | Julia is incredibly slow to compile. Far worse than C++ last
         | time I tried. For interactive notebooks, I found it's common to
         | have a single line take a couple minutes to compile - it's
         | simply not interactive in the way Python is, despite having
         | good runtime performance once compiled.
         | 
         | The deal breaker for me is that its compiler is JIT based
         | whether you want it or not. Code will hang at runtime for
         | multiple minutes as it compiles all the library dependencies
         | and specializes them to the given data types.
        
           | adgjlsfhk1 wrote:
           | Do you have an example for minutes of compile time for a
           | single line? Especially since 1.6 (march 2021), I haven't run
           | into that (and I'm using Julia pretty much full-time).
        
             | hutzlibu wrote:
                | So the compile times have gotten better, but are still
                | quite slow in comparison?
        
               | adgjlsfhk1 wrote:
               | In comparison to what? I've found that it's very
               | responsive in a repl/notebook for interactive work
               | (especially with Revise). It's not good for running short
               | scripts, but if you have something that's doing some real
               | work, the compilation often isn't a big deal.
        
               | fault1 wrote:
               | Personally, I think interactive usage has improved quite
               | a bit. I don't "feel" it that badly in 1.7. It was bad
               | enough earlier in the 1.0 series that I stopped using it
               | for a while (I used it before <1.0 as well).
               | 
               | I think the problems were from:
               | 
               | 1) llvm itself getting dramatically slower over time,
               | before more focus upstream on compiler speed more
               | recently: https://www.npopov.com/2020/05/10/Make-LLVM-
               | fast-again.html
               | 
               | 2) different types of invalidations
               | 
               | 3) less than ideal precompilation of packages
               | 
               | I think all of it has been improved, and can certainly be
               | improved further. That also includes improving the speed
               | of its interpreter (vs the JIT), which exists but was
               | never optimized for speed afaik.
        
           | ElectronCharge wrote:
           | This is total misinformation, sorry. Julia may, depending on
           | your setup, be slow to initially load, but the compiler is
           | quite fast generally.
           | 
           | Also, there's a solution to precompile binaries with no JIT
           | penalty...
           | 
           | https://github.com/JuliaLang/PackageCompiler.jl
           | 
           | Enjoy!
        
         | Narishma wrote:
         | Its ridiculously large memory usage might be a reason.
        
         | open-source-ux wrote:
          | During its development, Julia was heavily promoted as a
          | language for scientific computing.
         | 
         | Nowadays, the language is promoted as a general purpose
         | language suitable for any task or domain.
         | 
         | However, the perception that Julia is primarily for scientific
         | computing remains strong among many developers. Only time will
         | tell if that perception changes.
        
         | Hasnep wrote:
         | In addition to the other replies, the comparatively long
         | startup time makes it unsuitable for short scripts, and with a
         | few dependencies it becomes unsuitable for even medium length
         | programs. The startup time is always coming down and there are
         | ways to compile dependencies ahead of time though. The JIT
         | compilation can be annoying for real-time applications too.
        
         | _dain_ wrote:
         | I think founder effects. It started in the math and science
         | community so that's where most of the library-writing energy
         | has gone into.
        
         | ludamad wrote:
         | I think it competes with R, which has a similar usage profile
        
         | g42gregory wrote:
         | Lack of integration with other massive libraries that Python,
         | Java and C++ have. Lack of books, tutorials and extensive
         | knowledge bases that Python, Java and C++ have.
        
           | NeutralForest wrote:
           | There's a big effort to make that happen though, with Julia
           | interop https://github.com/JuliaInterop
        
           | adgjlsfhk1 wrote:
            | Java doesn't integrate well with Julia, but Julia's
            | interop with C++ and Python is pretty painless. PyCall in
            | particular is about as easy as you could imagine.
        
         | fault1 wrote:
         | These are what i'd say, as someone who has tinkered with it a
         | lot but uses python/C++ for $day_job:
         | 
         | - immature web/network services stack. For better or worse the
         | lang seemed to focus on more number crunching throughput early
         | than reliable distributed data processing, but the latter is a
         | much larger market and what most enterprises care about.
         | 
          | It's the difference between a lang having only a Matlab- or
          | R-like ceiling vs a Python-like ceiling.
         | 
          | - poor static compilation and static analysis story. This
          | is what a lot of people putting code into production need.
         | 
         | It's also standard expectation now with a lot of dynamic
         | language code (python or js) that aims to be in production.
         | 
         | - for certain types of realtime uses, more direct control of
         | memory is doable, but you have to jump through hoops. Again,
         | determinism is very important.
         | 
         | - historically, compile times were quite horrid. part of this
         | was llvm, part of it was julia. It's gotten way better in 1.6
         | and 1.7, but compilation results need to be cached more than
         | just precompilation.
         | 
          | However, all that being said, I love the lang! There is a
          | lot going for it, especially how the package ecosystem works.
          | There are also a lot of innovative projects.
        
           | 77pt77 wrote:
           | > poor static compilation or static analysis
           | 
           | You're comparing it with python.
        
             | fault1 wrote:
              | Yes, and Python has a large number of analyzers:
             | 
             | https://github.com/typeddjango/awesome-python-typing
             | 
             | https://github.com/ethanhs/python-typecheckers
             | 
             | though i'd say they are worse than similar tooling in
             | typescript/javascript.
             | 
             | julia's tooling is relatively nascent.
        
               | adgjlsfhk1 wrote:
                | Julia has @code_warntype for type checking and JET.jl
                | for static analysis (and SnoopCompile/Cthulhu for the
                | really advanced stuff). Is there anything those are
                | missing?
        
       | [deleted]
        
       | bernardv wrote:
       | Those are two very different questions.
        
       | btdmaster wrote:
        | I wonder if guile[1][2] would be competitive with the other
        | Lisp (Scheme) implementations. It was significantly faster
        | than Python at numerical tasks in my experience.
       | 
       | [1] https://www.gnu.org/software/guile/
       | 
       | [2] https://web-artanis.com/scheme.html
        
       | Jeff_Brown wrote:
        | What matters to me is incremental build time. Rarely do I
        | compile a big program in its entirety from scratch.
       | 
       | I don't use Unison but I would love to for this reason.
        
         | elcritch wrote:
          | The title is a bit misleading. It's not clear if the compile
          | times are included, but the numbers do include the benchmark.
          | That's why the exact same sbcl compiler has different times.
         | 
         | The benchmarks are mixed with some having simd and some using
         | threads. It'd be helpful to be able to sort submissions by
         | threaded/unthreaded as well.
         | 
          | Still kinda fun to read the different languages. I found Zig
          | very chatty and annoying to read, even compared to the C
          | versions. The Common Lisp was foreign to me. The Fortran 90
          | with OpenMP and Julia versions were pleasant to read. The
          | Nim version was easy to read but lacks threading and simd.
        
           | [deleted]
        
         | anchpop wrote:
          | Unison cheats at this by being interpreted; if it weren't,
          | it would have the same build times as everyone else. (I
          | suppose caching would be better in some cases, but I'm not
          | sure if it'd be much better than what you get with nix?)
        
         | vlovich123 wrote:
         | Webpack clean: 3 minutes
         | 
         | Incremental: 700ms
         | 
         | Esbuild clean: 100ms
        
         | wglb wrote:
         | Sbcl compiles itself in just a couple of minutes.
        
           | adenozine wrote:
            | I've always enjoyed seeing how it builds its components
            | faster and faster during the bootstrap. One of those
            | simple joys of the make process.
        
         | adamrezich wrote:
          | Jonathan Blow's new language compiler does not do
          | incremental builds, yet it builds large projects extremely
          | quickly (a few seconds).
        
       | adamddev1 wrote:
       | Can anyone explain why C++ beats out C so clearly?
        
         | dorinlazar wrote:
            | C++ is just a stricter C: if you write proper C, it will
            | be a correct C++ program, and therefore it would run at
            | the same speed. There's no way C can be more than
            | marginally faster than C++, if you write C and compile it
            | with a C++ compiler.
         | 
         | Then you have various optimizations that come from things like
         | templates, constexpr. Look at the qsort method - to have a
         | modicum of genericity you need to pass a function that will be
         | invoked. Instead, a C++ version would be able to be optimized
         | to compare the elements directly, without calling a function,
         | avoiding a big number of function calls.
         | 
            | There's no way to make C faster than well-written C++. But
            | C++ is complex, and a lot of people write bad C++; that's
            | why it has developed a bad reputation. It's really
            | undeserved.
        
           | ReleaseCandidat wrote:
           | > There's no way C can be other than marginally faster than
           | C++
           | 
            | In theory `restrict` (which disallows pointer aliasing,
            | like in Fortran) could lead to optimization gains, as
            | could not having exceptions (the possibility of exceptions
            | prohibits certain optimizations in the C++ compiler).
        
             | dorinlazar wrote:
             | I agree; of course, both clang and gcc extend C++ with
             | __restrict__ if I remember correctly; from what I know,
             | there are more things that can be done in C++ to improve
             | over C than the other way around.
        
         | ReleaseCandidat wrote:
          | There is no reason; the fastest C++ code is actually C. I
          | really don't know why somebody wrote that as C++.
         | 
         | https://github.com/hanabi1224/Programming-Language-Benchmark...
         | 
         | https://github.com/hanabi1224/Programming-Language-Benchmark...
        
         | sdfdf4434r34r wrote:
         | Because it's not C++ code:
         | https://github.com/hanabi1224/Programming-Language-Benchmark...
        
         | khoobid_shoma wrote:
          | I think the C code in the benchmarks is not really
          | efficient. An experienced C programmer does not simulate
          | C++-style memory management if speed matters.
        
         | mwattsun wrote:
         | I was curious too and found this:
         | 
         | https://stackoverflow.com/questions/2513741/c-vs-c-for-perfo...
         | 
          | from which I gather that if you're using an allocator
          | function that zeros out memory, then it will be slower, of
          | course.
        
       | rjsw wrote:
       | Not very impressed with those Lisp benchmarks that I examined.
        
         | dybber wrote:
         | Not the Ocaml ones either. Not sure this makes for a fair
         | comparison.
        
       | debesyla wrote:
        | I wonder why PHP wasn't included? It is still very popular
        | for websites... Is it somehow not comparable/not worth
        | comparing?
        
       | medo-bear wrote:
        | Common Lisp (SBCL) performance via a native implementation of
        | simd [0] is very impressive! It is literally achieving C/C++
        | speeds (within a few ms). Great work by Marco Heisig.
       | 
       | [0] https://github.com/marcoheisig/sb-simd
        
       | clmay wrote:
       | I've been meaning to write a new frontend for the Computer
       | Language Benchmarks Game that allows comparison of arbitrary
       | pairings for a few years now and never got 'round to it.
       | 
       | It's about time someone did. Thank you and nice work!
        
       | savant_penguin wrote:
       | What makes zig so memory efficient?
        
         | coldtea wrote:
          | Manual memory management a la C, and easy access to custom
          | allocators (arenas, etc.) that usually aren't used in most C
          | code (and that C doesn't offer by default).
          | 
          | Add compile-time processing, which can go pretty far in
          | reducing all kinds of otherwise-runtime ops...
        
       ___________________________________________________________________
       (page generated 2021-12-18 23:01 UTC)