[HN Gopher] Julia: Faster than Fortran, cleaner than Numpy
       ___________________________________________________________________
        
       Julia: Faster than Fortran, cleaner than Numpy
        
       Author : cs702
       Score  : 153 points
       Date   : 2021-06-20 15:34 UTC (7 hours ago)
        
 (HTM) web link (www.matecdev.com)
 (TXT) w3m dump (www.matecdev.com)
        
       | woeirua wrote:
       | Can someone explain how Julia MT got a super-linear improvement
       | in performance by using 4 cores + HT vs 1 core?
       | 
       | Fortran's MT performance is more along the lines of what I would
       | expect.
        
       | johan_felisaz wrote:
        | I think most of the criticisms against Julia come from the fact
        | that people see it as a Python competitor. In its current state
        | (even if the last version improved compile time a bit), it is not
        | really usable as an exact python replacement (short scripts,
        | opening a new interpreter each time).
       | 
       | However it is a revolution for the HPC crowd. If your program is
       | supposed to run for several days on multiple nodes, 10 seconds of
       | compilation is a negligible price to pay for a much cleaner and
       | shorter code than Fortran or C/C++.
       | 
       | I've been using Julia for a few months for physics related
       | simulations and it's just so much better than Fortran to work
       | with.
        
         | eigenspace wrote:
         | While I definitely agree, I'd say Julia is totally fine for
         | short scripts too. Just spin up
         | https://github.com/dmolina/DaemonMode.jl and you can launch
         | lots of short scripts without excessively restarting julia.
         | 
         | There's also PackageCompiler.jl
         | https://github.com/JuliaLang/PackageCompiler.jl for AOT
         | compilation, but that's heavy enough that I wouldn't bother
         | with it for short scripts.
        
           | johan_felisaz wrote:
           | Of course, these packages are fantastic, they do help a lot
            | for more experienced users. But for a pure beginner, there is
            | a high risk they will give up on Julia after 10 seconds of
            | Plots.jl compilation, way before they would hear about these.
           | 
           | (and tbh, even leaving a REPL open all the time works
           | perfectly fine, since include("myscript.jl") really does
           | reload the script, unlike the import keyword in Python)
        
             | eigenspace wrote:
              | Yeah, that's fair. I guess I read your comment as saying
              | that julia isn't a good replacement for Python, rather
              | than that it takes a tiny bit of learning when you're
              | coming from other languages.
             | 
             | I think the way we talk about julia sometimes makes Python
             | and Matlab programmers think they can just take some of
             | their old code and copy paste it into julia, switch around
             | some keywords and have everything be faster. It's important
             | to emphasize to these people that it's a different language
             | with its own idioms that need to be learned.
        
       | agumonkey wrote:
       | Interesting to see.. for so many years julia was so much in a
       | blur.
        
       | frr149 wrote:
       | What makes Julia so fast?
        
         | eigenspace wrote:
          | Being designed to be fast. Lots of other high level languages
          | were designed with the assumption that performance-critical
          | code would be written in low-level languages, so they didn't
          | bother designing the language to make various optimizations
          | possible.
         | 
         | Julia was designed for people who don't want to use a low level
         | language for speed. It took a lot of great ideas from languages
         | like Common Lisp, Fortran, etc. as well as a few good ideas of
         | its own and packaged them together in a rather novel way.
         | 
          | The language is dynamic, but its dynamism is limited in
          | specific ways that make it very easy for the compiler to
          | optimize code.
        
         | jakobnissen wrote:
          | Functions compile just-ahead-of-time to native code using LLVM.
          | Functions specialize based on the input types used and rely on
          | type inference, meaning that in idiomatic Julia code, all types
          | are known at compile time. Finally, Julia uses a struct layout
          | similar to that of static languages like C or Rust.
         | 
          | The trifecta of complete type information, compilation to
          | native code, and efficient data layout is what makes the
          | difference.
          | 
          | Notably, Julia is not any faster than most static languages,
          | nearly all of which also use the same trio.
        
       | nxpnsv wrote:
       | Is gfortran considered a good fortran compiler? Back in my day, I
       | think it still generated C code that then was compiled with gcc.
       | I might be wrong, this was in the 90s...
        
       | rahimiali wrote:
        | You can speed up the numpy code by 2x by replacing the line:
        | 
        |       M_old = np.exp(1j*k*(np.tile(a,(N,1))**2 +
        |                            np.tile(a.reshape(N,1),(1,N))**2))
        | 
        | with
        | 
        |       a_squared = a ** 2
        |       M = np.exp(1j * k * (a_squared[:, None] + a_squared[None, :]))
        | 
        | This gets you to the same performance as single-threaded julia.
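        | 
        | A quick sanity check (a sketch with made-up N, k, and a, not the
        | article's values) that the broadcast rewrite is a drop-in
        | replacement for the tile version:

```python
import numpy as np

# Placeholder values for illustration; the article benchmarks much larger N.
N = 100
k = 100.0
a = np.linspace(0, 2 * np.pi, N)

# Tile-based construction from the article.
M_old = np.exp(1j * k * (np.tile(a, (N, 1)) ** 2
                         + np.tile(a.reshape(N, 1), (1, N)) ** 2))

# Broadcast-based rewrite: square once on the 1-D array, then let
# broadcasting form the (N, N) grid of pairwise sums.
a_squared = a ** 2
M = np.exp(1j * k * (a_squared[:, None] + a_squared[None, :]))

assert np.allclose(M, M_old)
```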
        
       | rickard wrote:
       | I recently (last week) started using numba, for similar reasons
       | to why the author seems to like Julia. I tested translating his
        | example to numba:
        | 
        |       @numba.njit(parallel=True, fastmath=True)
        |       def w(M, a):
        |           n = len(a)
        |           for i in numba.prange(n):
        |               for j in range(n):
        |                   M[i,j] = np.exp(1j*k * np.sqrt(a[i]**2 + a[j]**2))
        | 
        | and timed it like this:
        | 
        |       %%timeit
        |       n = len(a)
        |       M = np.zeros((n,n), dtype=complex)
        |       w(M, a)
       | 
        | On my 8-core system, this ends up more than 10x as fast as the
        | numpy version he listed (which seems to lack the sqrt, though),
        | which would place it close to the multithreaded Julia, even
        | considering that he ran it on a 4-core system. As an added
        | bonus, it can also pretty much automatically translate to GPU
        | as well.
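        | 
        | The explicit loop and a vectorized numpy expression with the
        | sqrt should agree; a small check (tiny n and a made-up k, just
        | for illustration):

```python
import numpy as np

# Tiny illustrative check (made-up k and a) that the loop body above
# matches a vectorized numpy expression that includes the sqrt.
k = 100.0
a = np.linspace(0, 2 * np.pi, 20)
n = len(a)

# Element-by-element loop, as in the numba version (but plain Python here).
M_loop = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        M_loop[i, j] = np.exp(1j * k * np.sqrt(a[i] ** 2 + a[j] ** 2))

# Vectorized equivalent via broadcasting.
A = a[:, None]
M_vec = np.exp(1j * k * np.sqrt(A ** 2 + A.T ** 2))

assert np.allclose(M_loop, M_vec)
```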
        
         | moelf wrote:
          | the rise of numba shows us why numpy and "just write
          | vectorized-style code with a C++ backend" is not enough.
          | 
          | yet Numba basically makes your python code not python. It
          | doesn't support so many things: pandas dataframes, or even
          | something as simple as a dict(), which means you often have to
          | manually feed your numba function separate arguments.
          | 
          | Separating a complicated calculation into numba-inferable
          | parts and non-inferable ones is not fun and sometimes just
          | impossible.
        
           | rickard wrote:
           | Yep, completely agree. For the project I'm currently doing,
           | it seems like a fairly good fit though. Lots of prototyping
           | different approximations, and needs to be faster than plain
           | numpy.
           | 
           | Also, the jitclass things help somewhat. I use them as plain
           | data containers, to work around the hideously long argument
           | lists that otherwise would be required, but with no methods.
           | jitclass breaks the GPU option, though.
        
           | garyrob wrote:
            | Actually, Numba does support dicts now. (You can't have a mix
            | of types in the dicts unless that's changed, but that isn't
            | an actual problem for most ML work.) I have used numba very
           | effectively to make my machine learning research projects run
           | very quickly. I don't use pandas; I do use a lot of numpy and
           | scipy. I understand that pandas can use numpy arrays for at
           | least some things. Since numba works great with numpy, it
           | seems like that might be an approach for using it with
           | pandas, in at least some cases.
        
           | N1H1L wrote:
           | I have personally experimented quite a lot with numba. When
           | it works, it's great. However, numba can have very cryptic
           | error messages making it difficult to debug.
           | 
            | Which is why I switched to dask, which, even though slower,
            | integrates better with numpy.
        
       | dfdz wrote:
       | This example is not really testing Fortran vs. Julia, but rather
       | testing the speed of the math library you linked to.
       | 
       | The most expensive part of the fortran code is evaluating sqrt
       | and exp.
       | 
        | For example, on a 5600X the current Fortran code (with n=10 000)
        | takes about 2.7 seconds. If I change the definition of M to
        | remove the special functions (M(i,j) = a(i) + a(j)), then it
        | takes 0.3 seconds. (Using gfortran with -Ofast)
       | 
       | When the special functions are the issue, using the Intel Fortran
       | compiler and Intel MKL will make the code about 2x faster (this
       | is what Matlab or Julia is probably using)
        
         | celrod wrote:
         | `vsqrtpd` is an assembly instruction. `exp` and `sincos` are
         | implemented in Julia.
         | 
          | If you want to use a library to speed up the Julia code, check
          | out https://discourse.julialang.org/t/i-just-decided-to-
          | migrate-... (but replace `@tvectorize` with `@tturbo`), which
          | is code by eigenspace.
        
         | moelf wrote:
          | Julia ships with OpenBLAS; in some cases there are pure-Julia
          | "BLAS-like" routines that can be just as fast:
          | 
          | https://github.com/mcabbott/Tullio.jl
        
           | celrod wrote:
           | Or much faster (at small sizes):
           | https://github.com/JuliaLinearAlgebra/Octavian.jl
        
             | eigenspace wrote:
             | Also much faster at very large sizes :)
        
       | elcomet wrote:
       | And slower (to start) than both of them. I tried julia to write
       | some scripts that were previously in numpy, but the slow start
       | threw me off. I hope they solve this because it's a great
       | language.
        
         | eigenspace wrote:
          | The startup times have improved a lot in version 1.6, the
          | latest release, and are improving again in the upcoming
          | release candidate.
         | 
         | Are you hitting the startup time very often? If so, you might
         | want to try some things to keep one julia session open and
         | sending code to it like a daemon instead of constantly closing
         | and opening sessions.
         | 
         | This package makes that workflow really easy:
         | https://github.com/dmolina/DaemonMode.jl
        
         | dosshell wrote:
          | Same for me. Ported my calculations to Python from Julia. Even
          | before Julia had started, python was done. And Julia also does
          | jit compilation at runtime. A function call could take seconds
          | on first call.
        
           | moelf wrote:
            | if it's that fast (under ~1s) and your use-case is to call
            | it many, many times from a cold start, you should just use
            | Python or even bash, since there's barely any performance to
            | speak of. (i.e. nothing really changes the usability anyway)
        
             | kzrdude wrote:
             | Julia startup can easily be minutes with big libraries
        
               | moelf wrote:
               | `precompile` is a one-time thing.
               | 
                | Actually loading them when running scripts takes on the
                | order of seconds even for some of the biggest libraries
                | (Plotting, Differential equations, etc.)
        
         | jlamberts wrote:
         | I have also recently been playing around with Julia coming from
         | numpy/pandas. I found that 1.6.1 felt way smoother on startup
         | and import than 1.5.x. Might be worth trying to upgrade if you
         | haven't already. At the very least it's encouraging that it
         | seems like they're making progress in that direction.
        
         | bobbylarrybobby wrote:
         | Unfortunately Julia is a "repl-based language". You are
         | expected to open a repl and work in it for as long as possible
         | so that your compilations all remain cached. Packages like
         | Revise can help reevaluate code that changes so that you don't
         | have to leave the repl.
         | 
         | I do wish that Julia would start up in an interpreted mode and
         | compile in the background so that it would be fast enough when
         | first opened and then attain maximum speed later on. (I think
         | this is how JavaScript engines work?)
        
           | eigenspace wrote:
           | You can have what you want here by using DaemonMode.jl
           | 
           | https://github.com/dmolina/DaemonMode.jl
        
             | snicker7 wrote:
              | It would be nice if there were an "offline version" of
              | this, e.g. automagically cache all compiled code to disk.
              | Daemon mode isn't restart-friendly.
        
               | adgjlsfhk1 wrote:
               | That's basically what PackageCompiler does. That said,
               | StaticCompiler.jl is expected to improve the state of
               | this in the future.
        
           | Sanguinaire wrote:
           | I really wish there was an option for better non-REPL usage -
           | basically Cargo for package installation and ahead-of-time
           | compilation, but with the Julia syntax and ethos.
        
           | tsimionescu wrote:
           | > (I think this is how JavaScript engines work?)
           | 
           | Yes, modern V8 does this, as does the JVM. Common Lisp
           | runtimes also often support this, though I think they usually
           | leave it up to the programmer to choose when to compile a
           | function, they don't always do it automatically. The .NET CLR
           | also behaves like Julia - JIT compile on first execution of
           | every function.
        
         | davrosthedalek wrote:
          | I thought Julia was a compiled language. Surprising that the
          | startup is then so slow.
        
           | acmj wrote:
            | Think about compiling a large C++ program and its
            | dependencies every time you run the generated executable.
            | This is sort of like what Julia is doing.
        
           | najtdale wrote:
           | JIT compiled. Has a warmup time.
        
             | pletnes wrote:
              | It's just-ahead-of-time compiled with no "interpreted
              | mode", so it's always slow to start, if I understand
              | correctly. Really annoying, especially the first time you
              | use it.
        
               | cbkeller wrote:
               | If you need to run small scripts and can't switch to a
               | persistent-REPL-based workflow, you might consider
               | starting Julia with the `--compile=min` option. If you
               | know which packages you're going to need, you can also
               | reduce startup times dramatically by building a sysimg
               | with PackageCompiler.jl
               | 
               | There is also technically an interpreter if you want to
               | go that way [1], so in principle it might be possible to
               | do the same trick javascript does, but someone would have
               | to implement that.
               | 
               | [1] https://github.com/JuliaDebug/JuliaInterpreter.jl
        
             | davrosthedalek wrote:
             | Ah, I see. That explains a lot. Could also be the reason
             | why it beats gfortran in the test. A JIT knows more about
             | the data the program sees/runtime behavior than a normal
             | compiler. Still surprising. Or is gfortran just so much
             | behind state of the art fortran compilers?
        
               | eigenspace wrote:
               | Not this test. Julia's JIT compiler does not know about
               | runtime values, it's not a tracing JIT like the ones you
               | may be used to. Instead, it's 'just ahead of time',
               | compiling a specialized executable for each new function
               | signature the first time that signature gets hit. I.e. if
                | I do
                | 
                |       f(x) = 2x + 1
                | 
                | the first time I call
                | 
                |       f(1)
                | 
                | it'll compile a function specialization for f(::Int);
                | that first call will be slow and all the subsequent
                | calls will be fast.
                | 
                | Next, if you do
                | 
                |       f(1.0 + 2im)
                | 
                | it'll compile a new specialization for
                | f(::Complex{Float64}) which will be slow the first time
                | while it compiles and then fast on all the subsequent
                | runs.
               | 
                | The genius of Julia's design is that the JIT compiler is
                | designed around the semantics of multiple dispatch, and
                | the multiple dispatch semantics are designed around
                | having a JIT.
        
       | ianhorn wrote:
        | This is an unfair example of trying to make the numpy readable:
        | 
        |       n = len(a)
        |       M = np.exp(1j*k*(np.tile(a,(n,1))**2 +
        |                        np.tile(a.reshape(n,1),(1,n))**2))
        | 
        | This has a common issue in functional programming and there's an
        | easy trick to fix it. Change the big one-liner by naming things:
        | 
        |       n = len(a)
        |       A = np.tile(a,(n,1))
        |       A_T = np.tile(a.reshape(n,1),(1,n))
        |       M = np.exp(1j*k*(A**2 + A_T**2))
        | 
        | Much easier to read now, and it even exposes an error (the sqrt
        | is missing!). Before we fix that, let's make the obvious
        | simplification of using .T for the transpose:
        | 
        |       n = len(a)
        |       A = np.tile(a,(n,1))
        |       M = np.exp(1j*k*(A**2 + A.T**2))
        | 
        | Then let's use numpy's broadcasting to be more idiomatic:
        | 
        |       A = a[:, np.newaxis]
        |       M = np.exp(1j*k*(A**2 + A.T**2))
        | 
        | Now let's fix the bug:
        | 
        |       A = a[:, np.newaxis]
        |       M = np.exp(1j*k*np.sqrt(A**2 + A.T**2))
        | 
        | or even
        | 
        |       A = a[:, np.newaxis]
        |       root_sum_squares = np.sqrt(A**2 + A.T**2)
        |       M = np.exp(1j*k*root_sum_squares)
        | 
        | I think the last two of these are fairer comparisons.
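        | 
        | The rewrites above are value-for-value equivalent; a quick check
        | with placeholder n, k, and a (using the sans-sqrt forms):

```python
import numpy as np

# Placeholder inputs (not the article's): verify each refactoring step
# produces the same matrix.
n, k = 50, 100.0
a = np.linspace(0, 2 * np.pi, n)

# Tile-based one-liner.
M1 = np.exp(1j * k * (np.tile(a, (n, 1)) ** 2
                      + np.tile(a.reshape(n, 1), (1, n)) ** 2))

# Named intermediate with .T for the transpose.
A = np.tile(a, (n, 1))
M2 = np.exp(1j * k * (A ** 2 + A.T ** 2))

# Broadcasting instead of tiling.
B = a[:, np.newaxis]
M3 = np.exp(1j * k * (B ** 2 + B.T ** 2))

assert np.allclose(M1, M2) and np.allclose(M2, M3)
```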
       | 
       | --------
       | 
        | And to pitch the hype train for jax: if you want to fuse the
        | loops and/or get gpu/tpu for free, just swap np for jnp:
        | 
        |       A = a[:, jnp.newaxis]
        |       M = jnp.exp(1j*k*jnp.sqrt(A**2 + A.T**2))
        
         | eigenspace wrote:
         | The julia implementation was also suboptimal to be fair. I sped
         | it up considerably here:
         | 
         | https://discourse.julialang.org/t/i-just-decided-to-migrate-...
         | 
         | and then even further here:
         | 
         | https://discourse.julialang.org/t/i-just-decided-to-migrate-...
         | 
         | I'd actually be very interested if there's anything other than
         | handwritten assembly that's faster than the second one I
         | posted.
         | 
         | ______________
         | 
          | Regarding your more readable version of the Numpy syntax
          | though, I think you have to admit it's still not as readable
          | as
          | 
          |       for i in 1:N, j in 1:N
          |           A[i,j] = exp((100+im)*im*sqrt(a[i]^2 + a[j]^2))
          |       end
          | 
          | right?
         | 
         | I'll also note that the author of the post didn't come from the
         | Julia community. He was a heavy user of Python and Fortran,
         | calling Fortran from Python to speed up performance hotspots.
         | He wrote this blogpost as his first introduction to Julia.
         | 
          | So if it does cast numpy in a poor light, I don't think that
          | was intentional.
        
           | ianhorn wrote:
           | Regarding your performance work, you've nerd sniped me into
           | looking for analytical tricks to speed it up ;) We'll see...
           | 
            | Regarding the readability, I suppose it's a matter of taste
            | at this point, but if I swapped `import numpy as np` for
            | `from numpy import newaxis, exp, sqrt`, then IMO, the numpy
            | is more readable:
            | 
            |       A = a[newaxis]
            |       M = exp(1j * k * sqrt(A**2 + A.T**2))
           | 
           | But then my tastes also run towards preferring the
           | handwritten definitions to be in terms of vectors and
           | transposes instead of elements and indices, at least until
           | you get to many indexed tensors.
           | 
           | Side note: any idea why the python and fortran are exp(i k...
           | while the julia is exp((100 +i) i...)? Is it something I
           | overlooked?
        
             | eigenspace wrote:
             | > Regarding your performance work, you've nerd sniped me
             | into looking for analytical tricks to speed it up ;) We'll
             | see...
             | 
             | Looking forward to it! These microbenchmarks are very fun
             | to explore.
             | 
             | > Side note: any idea why the python and fortran are exp(i
             | k... while the julia is exp((100 +i) i...)? Is it something
             | I overlooked?
             | 
              | Oh, that is a partially applied edit, I guess. When the
              | author posted on the julia forum, people pointed out that
              | since the exponent was purely imaginary, it could be sped
              | up even more with cis(...) instead of exp(im * ...), but
              | then the author claimed that exp((100 + im) * ...) was
              | more representative of his actual workflow and I guess
              | changed the julia version in his blogpost but not the
              | Python or Fortran versions.
        
               | ianhorn wrote:
               | I got bored with trying to find an analytical boost, but
               | I benchmarked a couple IMO super readable python versions
               | (basically what's in my original comment after making the
               | (100+i)i change):
               | 
               | https://colab.research.google.com/drive/1ABrZJlm8pwB6_Sd6
               | ayO...
               | 
               | On my macbook, using XLA's jit in python gave about a
               | 12-15x speedup on CPU over OP's solution, which was
               | pretty cool, but I'm too lazy to figure out how to
               | install and benchmark Julia on my machine. Applying a
               | 12-15x speedup would at least beat the Julia MT solution
               | in OP, and you've got to admit `exp(CONST * sqrt(A**2 +
               | A.T**2))` is a pretty clean way to do it.
               | 
               | Then I ran on whatever GPU colab decided to give me (a
               | P100), and for just adding a decorator, it's a
               | 1000x-1900x speedup (better as n goes up). Hence my
               | current honeymoon period with jax. I love the speed vs
               | readability tradeoff.
        
               | celrod wrote:
               | > I'm too lazy to figure out how to install and benchmark
               | Julia on my machine.
               | 
               | Assuming you're eager to try once you've found out:
               | 
               | You should be able to simply download and unpack a binary
               | from: https://julialang.org/downloads/ I'd strongly
               | recommend going with the current stable release (1.6.1).
               | 
                | To start the Julia REPL:
                | 
                |       bin/julia
                | 
                | To install packages:
                | 
                |       using Pkg
                |       Pkg.add("BenchmarkTools")
                |       Pkg.add("LoopVectorization")
                | 
                | then you can copy/paste code from eigenspace's link to
                | discourse to run Julia benchmarks. E.g., you can
                | copy/paste from https://discourse.julialang.org/t/i-just-
                | decided-to-migrate-... if you define
                | 
                |       using LoopVectorization
                |       const var"@tvectorize" = var"@tturbo"
               | 
               | (because `@tvectorize` has since been renamed on the
               | latest releases)
        
           | molticrystal wrote:
            | You can print out the assembly it generated with code_native
            | [0] and check whether there seem to be superfluous
            | instructions.
           | 
           | To really tune things you'd have to check the instruction
           | timings among other things[1] for your processor and to be
           | fair apply the equivalent setting for your compiler back end
           | as well if it supports tuning for your processor and anything
           | else applicable.
           | 
            | While there is likely some room for improvement, like most
            | optimization work it usually yields diminishing returns for
            | your time and may be small in nature. But sometimes
            | compilers, even those like llvm, do something strange, so
            | you never know for sure unless you check.
           | 
           | [0] https://docs.julialang.org/en/v1/stdlib/InteractiveUtils/
           | #In...
           | 
           | [1] https://www.agner.org/optimize/
        
         | moelf wrote:
         | >if you want to fuse the loops and get gpu/tpu for free
         | 
          | the work being done on JAX is great, and Julia shares many of
          | its goals (Julia was even the pioneer in some cases). The best
          | part is that your library doesn't even need numpy/JAX as a
          | dependency: a function that works on AbstractArray will work
          | on arrays that live on a GPU/TPU, for free. Thanks to multiple
          | dispatch, you don't need to write a ton of boilerplate
          | interface code or inherit classes from JAX/numpy.
        
       | cycomanic wrote:
       | Just another data point. I've just tested pythran (a numeric
       | python to C++ compiler) with the code from
       | https://discourse.julialang.org/t/i-just-decided-to-migrate-...
       | 
        | On my machine the julia code gives:
        | 
        |       N = 2000
        |       88.273 ms (8 allocations: 61.04 MiB)
        |       49.445 ms (3 allocations: 61.05 MiB)
        |       14.599 ms (2 allocations: 61.04 MiB)
        | 
        | If I compile the following code with pythran:
        | 
        |       #pythran export texp2(int)
        |       def texp2(N):
        |           a = np.linspace(0,2*np.pi,N)
        |           return np.exp(1j*100*(a[:,np.newaxis]**2+a[np.newaxis,:]**2))
        | 
        | I get:
        | 
        |       timeit texp2(2000)
        |       12.3 ms +- 27.1 us per loop (mean +- std. dev. of 7 runs,
        |       100 loops each)
        | 
        | So actually slightly faster than the optimized julia code.
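        | 
        | Note the pythran-annotated source is also plain numpy, so it can
        | be sanity-checked without compiling (small N here, just for
        | illustration):

```python
import numpy as np

# The "#pythran export" annotation is just a comment to plain Python, so
# texp2 can be checked uncompiled before handing it to pythran.
def texp2(N):
    a = np.linspace(0, 2 * np.pi, N)
    return np.exp(1j * 100 * (a[:, np.newaxis] ** 2 + a[np.newaxis, :] ** 2))

M = texp2(100)
assert M.shape == (100, 100)
# The exponent is purely imaginary, so every entry has magnitude 1.
assert np.allclose(np.abs(M), 1.0)
```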
        
         | eigenspace wrote:
         | How does Pythran fare against the code in this link:
         | 
         | https://discourse.julialang.org/t/i-just-decided-to-migrate-...
         | 
         | The author made some julia performance mistakes in his
         | benchmarks.
         | 
         | The code in this version:
         | 
         | https://discourse.julialang.org/t/i-just-decided-to-migrate-...
         | 
         | should be _very_ tricky to beat, and I suspect that the only
         | way is doing a _lot_ of customized assembly.
        
           | cycomanic wrote:
           | That is the code I'm comparing against. So the fastest Julia
           | version is actually the one using @tturbo in your link. I'm
           | not surprised, pythran gets me the best performance across
           | numba, customized cython and (for some comparisons) julia in
           | most of my tests.
        
             | eigenspace wrote:
             | Ah very interesting, this is great to know! Sorry I didn't
             | properly read your initial comment. I'm now quite
             | interested to see what we can do to match or beat Pythran
             | here.
             | 
              | What CPU are you using? I know my CPU, a Zen+ Ryzen 5 2600,
              | is one of the worst modern CPUs for this sort of thing, and
              | there's much better scaling on Intel CPUs or the Zen 3
              | CPUs, but those things are probably also affecting the
              | Pythran code as well.
             | 
             | Is Pythran any good at producing BLAS microkernels?
        
       | gnufx wrote:
        | The Fortran doesn't look quite equivalent, and we don't know how
        | it was run, though that's irrelevant to the analysis except
        | perhaps for OpenMP. As usual, it's a single number from a
        | benchmark without understanding it, e.g. with -fopt-info. The
        | general conclusion about Fortran is invalid.
       | 
       | It happens to be an example of a pathological vectorization case
       | for GCC with an ancient issue open, sigh [1]. Much as it pains me
       | to say so, apart from NAG's, I think, all the other compilers I
       | know will vectorize such loops: ifort, xlf, flang, PGI, nvfortran
       | (but the last three may be essentially the same here).
       | 
       | Also, last time I tried it, -floop-nest-optimize (marked
       | "experimental") was quite broken, again with an issue open either
       | in the gcc or redhat tracker. -O3 does unroll-and-jam, at least.
       | 
       | 1. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40770
        
       | cycomanic wrote:
        | Another thing to add: the python example code is written in
        | quite a weird (and in my opinion difficult to read) way.
        | 
        |       n = len(a)
        |       M = np.exp(1j*k*(np.tile(a,(n,1))**2 +
        |                        np.tile(a.reshape(n,1),(1,n))**2))
        | 
        | is equivalent to
        | 
        |       M = np.exp(1j*k*(a[:,None]**2 + a[None,:]**2))
        | 
        | which I find very easy to grasp.
        
       | walshemj wrote:
       | Question is which Fortran vs Which Version of Julia
        
       | kelsolaar wrote:
        | The article is surprising as it illustrates the reason for
        | switching to Julia with only a single, very simplistic example.
        | Surely there is more to it than this, especially considering
        | that the Python implementation could be improved for readability
        | and speed.
        
       | sdfhbdf wrote:
        | Maybe a nitpick, but why would someone put time on the Y axis? I
        | remember it's DRY MIX for graphs - dependent on Y axis (N),
        | independent on X axis (time)
        
         | bramblerose wrote:
          | Because what's measured is how the runtime (on the Y axis)
          | increases as the matrix size N (on the X axis) increases (i.e.,
          | you set N and measure t, rather than setting t and measuring N)
        
         | [deleted]
        
       | vslira wrote:
       | I just wish the Julia community wasn't so invested in using UTF-8
       | symbols in code. I understand that for physicists, engineers,
       | mathematicians, etc. it looks cleaner and more familiar, but when
       | writing code that is meant to be maintained and read over and
       | over again it makes a difference if that µ looks too much like a
       | u sometimes. What's so bad about just writing mu?
       | 
       | Anyway, this is not a complaint about the language (which I like
       | very much), just a dislike of the popular usage
        
         | dkarl wrote:
         | Physicists, mathematicians, and engineers have been writing µ
         | and u on chalkboards for years with no issues. Why should it be
         | harder on a screen?
        
           | dnautics wrote:
           | It's not the screen so much as the keyboard that is the
           | problem
        
             | dkarl wrote:
             | Code is read more often than it's written, even scientific
             | computing code.
        
               | dnautics wrote:
                | Agreed, but that doesn't make it not a problem.
        
             | eigenspace wrote:
             | They're very easy to type in Julia's repl, or any text
             | editor with Julia support.
             | 
              | You just type \mu and then hit the tab key and you get
              | µ. I actually switch over to my Julia REPL all the time
             | when I'm emailing someone and want to type a greek
             | character.
        
               | fnord77 wrote:
                | that's two more keystrokes than just "mu"
        
               | dnautics wrote:
               | > with Julia support.
               | 
               | So you can see how this becomes a bit of a barrier to
               | entry.
        
               | pjmlp wrote:
                | Anyone that doesn't use a language-aware editor is doing
                | themselves a disservice.
        
               | dnautics wrote:
               | that's not really acceptable. I'm a big julia fanboy,
               | have been since 0.4 (proof in bio) but dismissing real
               | pain felt by newcomers is kind of bad for the ecosystem.
        
               | manwe150 wrote:
               | What editor do you use, if I may ask, and have you asked
               | them to add unicode support? Part of the goal in
               | permitting unicode variable names is that more and more
                | niche editors might be forced to become fluent in
                | languages other than English, which benefits everyone
                | (and might be a bit overdue at this point).
               | 
               | I use vim, and have a completions plugin (part of julia-
               | vim mode), so that I can write \mu<tab> for any unicode
               | name, which makes it as easy to type as "mu", but easier
               | for me to read (as it matches my textbook expectation of
               | the formula).
        
               | dnautics wrote:
               | It's not a display issue. Do I want to type ctrl-u
               | <whatever hex> or copy-paste if I want to write some code
               | that calls a function when I'm playing around with a new
               | library? Well those are the two strategies I've used with
               | experience... But some person coming new to Julia will
               | grab some package, and then see some function call and
               | get deer in the headlights "how the hell do i do this".
               | 
               | And LaTeX codes are not always obvious. For example it's
               | \triangleleft but \leftarrow
        
               | eigenspace wrote:
               | I'm not sure I'd want to do any serious coding in an
               | editor that doesn't have the most basic support for the
               | language I'm using, but to each their own I guess.
               | 
               | For new users, if they're not used to writing unicode
               | characters and don't have an editor with nice support,
               | then they don't have to write them.
               | 
               | The julia community has a pretty strong convention of
               | giving both unicode and ascii versions of almost all
               | functions. Some packages don't have this convention, but
               | I'm not aware of any big important ones that don't follow
               | it.
        
             | gnufx wrote:
             | If Emacs doesn't have enough input methods, you can add one
             | reasonably easily... A good deal of early-ish work on
             | multilingual support in Emacs was done out of interest in
             | editing technical text, which is basically the same problem
             | as more-or-less complex natural language scripts (which
             | deserve support anyway, of course).
        
           | Traster wrote:
           | How exactly do you type mu? On a chalkboard writing the mu
           | sign is trivial, on a computer... well I don't have a mu
           | button.
           | 
           | Edit:
           | 
            | There are like 5 replies to this mentioning different ways of
            | doing it (including memorizing 3-digit unicode values), none
            | of which seem more intuitive than writing "mu".
        
             | qwertox wrote:
              | On Windows you can use AutoHotkey and map it to "Right
              | Windows Key" + u. Or use "Right Alt Key" + "Numpad 230" (ALT
              | code: https://usefulshortcuts.com/alt-codes/greek-alt-codes.php)
        
             | tikhonj wrote:
             | I just do \mu (or ;mu) in Emacs.
             | 
             | We're in the 21st century, all the tools we use should have
             | reasonable Unicode support. The fact that we collectively
             | keep on talking about typing non-ASCII characters as a real
             | problem is a pretty depressing reflection of our shared
             | computer infrastructure :(.
        
             | datastoat wrote:
             | For me: alt+enter (switches to Greek keyboard layout), m,
             | alt+enter (back to usual layout). I use Greek symbols in
             | Python and in Javascript. I do it almost without thinking
             | now, as lots of my coding is tightly linked to maths.
        
               | enriquto wrote:
                | you can map a useless key like capslock to do that in a
                | single step. Thus to type µ, for example, you simply
                | press CAPS+m, and similarly for all greek letters
        
             | [deleted]
        
             | datastoat wrote:
             | > none of which seem more intuitive than writing "mu"
             | 
             | Write for the convenience of the reader, not of the writer!
        
             | leephillips wrote:
             | I type <Greek>-m. One extra keypress, just like when I type
             | e, u, c, c, p, etc. If you type in more than one language,
             | or math, your keyboard is already set up to handle this.
        
             | moelf wrote:
             | in Julia:
             | 
             | \mu<tab>
        
             | jnxx wrote:
              | I use on Linux the XCompose key (which is assigned to the
              | menu key right of the space bar, or the "Print Screen" key),
             | and then "\mu" like in LaTeX. I don't need to think about
             | it because I have used LaTeX for years. I also use the
              | XCompose to generate € and ß which are special symbols
              | in my native language. This has the advantage that I can
             | use the UK International layout for everything, which makes
             | specifically writing code much more comfortable. Also, the
             | xcompose keymap is just a text file in my home directory, I
             | can extend it at any time and take it with me. The method
             | is also independent from any application.
             | 
             | Also, Emacs has an input method which supports "\mu", and
              | this convention is also used in Racket for things like λ.
        
             | nerdponx wrote:
             | Vim/Neovim makes it very easy to input all kinds of
             | interesting Unicode, using the "digraph" system. A lot of
             | other code editors have similar functionality, either
              | through keystrokes or some kind of "palette" interface. And
              | the Julia REPL itself has TeX-like escape sequences.
             | 
             | I agree that we need better system-wide tools for arbitrary
             | Unicode input. Font support and confusable glyphs are also
             | issues. But I think these are solvable problems, and they
             | will be good problems to have solved.
        
           | giantrobot wrote:
            | Besides the good points about the difficulty _typing_
            | such characters, not all fonts have good glyphs for
            | characters outside low ASCII. Sometimes they don't have
            | those glyphs at all and the system has to go to a
            | fallback font. Terminals can also shit the bed with
            | Unicode characters.
           | 
           | The solutions to those problems is to use better fonts,
           | terminals, and editors. Using Unicode characters is fine but
           | some people do have legitimate problems with them.
        
             | dragonwriter wrote:
              | > Besides the good points about the difficulty typing
              | such characters, not all fonts have good glyphs for
              | characters outside low ASCII
              | 
              | Not all fonts are good fonts for every use; heck, not
              | all fonts make 0 and O or 1 and l and I clearly
              | distinguishable.
             | 
             | > The solutions to those problems is to use better fonts,
             | terminals, and editors
             | 
             | Exactly.
        
               | [deleted]
        
             | cbkeller wrote:
             | On that topic, I'll throw in a shoutout to Cormullion's
             | excellent JuliaMono font [1,2]!
             | 
              | Typing certainly does add more friction between µ and u
              | than a chalkboard would, though it is perhaps also notable
             | that mathematicians seem to have felt that it's worth the
             | effort, and have come up with a quite nice system for it in
             | LaTeX -- and I would certainly not mind seeing more
             | languages and editors add support for LaTeX completions the
             | way the Julia ecosystem has.
             | 
             | [1] https://juliamono.netlify.app/
             | 
             | [2] https://github.com/cormullion/juliamono
        
           | 1270018080 wrote:
           | Because of keyboards?
        
           | yellowcake0 wrote:
           | The difference is that form is important in mathematics and
           | not very important in programming. Therefore symbols are used
           | because it makes it easier for the eye to recognize patterns.
           | 
            | For example these equations _look_ similar,
            | 
            | x^2+4x+5=0
            | 
            | y''+4y'+5y=0
           | 
           | Even though one is a polynomial equation and the other is a
           | differential equation, the common visual pattern suggests
           | that the techniques needed to solve them may be similar.
           | 
           | On the other hand in programming there's no need to do this,
           | all the math has already been worked out on paper, so it's
           | better to use clear, distinct, easy to type variable names.
        
             | Buttons840 wrote:
             | > form is important in mathematics and not very important
             | in programming
             | 
             | How important is form when I'm programming mathematics?
        
               | yellowcake0 wrote:
               | > How important is form when I'm programming mathematics?
               | 
               | Not important.
        
               | eigenspace wrote:
               | As a physicist who does a lot of mathematical programming
               | for my every day job, I'm going to disagree pretty hard.
        
               | yellowcake0 wrote:
               | With how I've defined form, I'm not sure how you could.
               | 
                | Who in their right mind does math on a computer? Math is
                | _implemented_ on a computer, but it's done by hand.
        
               | eigenspace wrote:
               | > Who in their right mind does math on a computer, math
               | is implemented on a computer, but it's done by hand.
               | 
                | Maybe you think this because you're used to writing math
                | on computers that make writing that math ugly and obtuse?
               | 
               | Besides, having the math you've written by hand closely
               | match the syntax you use on the computer can greatly
               | reduce the cognitive burden of switching between code and
               | paper.
        
               | plafl wrote:
               | I'm sure there is so much room for improvement here. The
               | closest thing for me is using TeXmacs:
               | https://m.youtube.com/watch?v=H46ON2FB30U
        
         | Karrot_Kream wrote:
          | The thing is, symbols like µ and λ often have well-understood
         | meanings in context, especially when using a particular
         | package. It would be like a programmer complaining about using
         | `i` to iterate through an array or `row` and `col` to iterate
         | through a 2D array.
        
         | abdullahkhalids wrote:
         | Mathematica has many non-latin symbols as well, and it works
         | great. The symbols in my paper are the same symbols in my code.
         | No confusion at all.
        
         | [deleted]
        
         | dragonwriter wrote:
         | > but when writing code that is meant to be maintained and read
          | over and over again it makes a difference if that µ looks too
          | much like a u sometimes.
         | 
         | So, if you are maintaining code where that is the idiom, use a
         | font where that isn't an issue.
        
         | 7thaccount wrote:
         | In certain domains, people will immediately recognize the
         | symbols for delta, lambda, mu...etc. That's how they're written
         | in the textbooks and academic white papers and so on. Seeing
         | the English word is actually more confusing.
        
         | whatshisface wrote:
          | The right name for a variable is related to how you think
          | people will check it. In most code the attestation
         | to the correctness of the code is the programmer's
         | understanding of the code. Clearly then names should be
         | descriptive. In contrast, when the attestation to the
         | correctness of a formula is that it is the same as a formula
         | printed in a published paper, then it should look as much like
         | the formula in the paper as possible. A programmer who does not
         | know that rho means density will not be able to catch
         | scientific misconceptions by virtue of the fact that they see
          | the word "density." On the other hand, they will be able to
         | catch differences between the symbols in the source material
         | and the source code.
         | 
         | In any code you write, you should be asking, "how will people
         | check this?" If the answer is "by comparing it to something
         | else," then it should resemble the original as much as
         | possible. If the answer is "by thinking about it," then it
         | should be formatted for self-contained comprehensibility. The
         | distinct use cases lead to different best practices.
        
         | moelf wrote:
          | I'm personally very empathetic to this. Fortunately, I think
          | most "real" developers are pretty restrained about this in
          | popular packages' code.
          | 
          | Some proper use cases IMHO are: 1. in "terminal" code:
          | scripts, notebooks. 2. function-internal variables 3. making
          | the code look like its counterpart in a paper.
          | 
          | idk what to think of 3. If you look at any paper, none of
          | them uses only ASCII, which raises the question: if we're
          | happy with reading papers (even CS ones) with symbols, why
          | not in our code?
        
           | spicybright wrote:
           | I definitely agree with this. It's just a pain to type out.
           | 
           | Maybe I'm behind on this, but needing to use the mouse to
           | find symbols in a huge menu is very painful.
           | 
           | You can copy and paste too but again, huge break of flow.
           | 
            | Do some editors support discord-style emoji syntax, where
            | typing :fo would bring up a menu of emojis that might match
            | foo? Then hitting enter inserts the emoji over the :foo:
            | representation. You can also skip the auto-complete menu.
            | 
            | Ex. :pow2: might turn into ²
        
             | moelf wrote:
              | Julia has the best REPL for this. You can type
              | \<name><tab> to get unicode symbols and emoji.
              | 
              | Better yet, you can reverse look up how to type a thing:
              | help?> χ²ᵢ       "χ²ᵢ" can be typed by
              | \chi<tab>\^2<tab>\_i<tab>
        
               | dnautics wrote:
               | Yeah. Now go edit your code.
        
               | craftinator wrote:
               | I actually agree with this sentiment, using non-keyboard
               | utf8 characters in code is just asking for massive
               | problems down the line. A fun party trick perhaps, but
                | such a bad idea for anything that isn't a hobby project.
        
               | dnautics wrote:
               | I disagree, in that it's the opposite. You might want to
               | put some internal calls in Unicode as a way to gatekeep
               | contributors and users to people who are serious about
               | using Julia. But if you think your project could be used
               | by casual explorers, your public API better not have
               | Unicode calls, or, at least provide sensible non-unicode
               | aliases for unicode calls.
        
               | leephillips wrote:
               | I can edit it in Vim with no problem.
        
               | dnautics wrote:
               | Sure, because you have a julia plugin. Not every casual
               | user trying out the language will bother, or want to
               | bother installing someone else's plugin. They'll just hit
               | a function call they need to make with a Unicode
               | character, go WTF man, and give up.
        
               | leephillips wrote:
               | I don't have a Julia plugin. I don't even understand what
               | you think the problem is.
        
               | e12e wrote:
               | You can easily search for omega and lambda via vim
               | slash(/) search command?
        
               | Karrot_Kream wrote:
               | Oh dang I didn't know this. Thanks, TIL!
        
             | vmladenov wrote:
              | If your editor doesn't help, on macOS Ctrl-Cmd-Space
              | brings up the character picker with the search bar
              | focused, so you can just search the name of the symbol
              | and press enter.
        
             | adgjlsfhk1 wrote:
              | Yeah. Many editors (VS Code, the Julia REPL, Atom) support
              | something very similar, where I can type `\ome`, tab-
              | complete to `\omega`, then tab-complete to ω. (Similar
              | completions exist for many other common unicode symbols.)
        
             | leephillips wrote:
             | In the Julia REPL you can type LaTeX commands for Greek
             | letters and math symbols, and a <TAB> converts them to the
              | actual character; so \mu<TAB> gets you a µ. Or you can just
             | input the Unicode directly. I'm constantly doing such
             | things everywhere, so the extra few keystrokes seem worth
             | it to get code where the math looks a more like math. Using
             | the mouse and a menu or character palette is the worst
             | possible solution.
             | 
             | EDIT: In general it makes sense to set up your keyboard
             | with a compose key and a "dead Greek" key, and set up your
             | Compose table so the shortcuts make sense to you. Then you
             | can use an expanded set of symbols everywhere, including
             | comment boxes like this one. You can even put things like
             | your email address in your Compose table.
        
       | goalieca wrote:
       | Whenever people run benchmarks on numerical code, especially
       | involving square root, I immediately wonder if the generated
       | output is either 'correct' or just approximate.
        
         | galangalalgol wrote:
          | I did notice in the benchmarks game they were using fast math
          | with a float and an iteration to get the inverse sqrt, but the
          | C++ and Rust examples were doing the same.
        
       | actinium226 wrote:
       | Wow, I may have to give Julia a try. I've been holding back
       | because of that classic language issue "it doesn't have the
       | libraries I like" but these performance numbers are impressive
       | and hard to ignore.
        
         | oivey wrote:
         | The need for libraries is a lot different, too. In Python when
         | you need a library what you specifically might need is a high
         | performance Python wrapper around a C library. Just writing
         | your own short thing isn't an option because pure Python is so
         | slow. In Julia that's not the case.
        
         | leephillips wrote:
         | In addition to everything else, Julia makes it easy to call
         | functions from Python, C, and Fortran libraries.
        
       | eirki wrote:
       | Genuine question: Is Julia used for anything other than
       | mathematics yet? Is there any sign of the Julia community
       | expanding beyond science, maths, machine learning etc?
        
         | jason0597 wrote:
          | I have to ask the obvious question: why?
         | 
         | What problems would they be trying to solve?
        
           | nerdponx wrote:
           | Gradually-typed, syntactic macros, tidy syntax, good Unicode
           | support in the standard library. All positive traits for app
           | and/or server development in my opinion.
        
         | nerdponx wrote:
         | Does the Julia runtime have good properties for something like
         | a Web server?
         | 
         | Slow startup makes it questionable for CLI tools, although that
         | is supposedly improving a lot.
         | 
         | But what about high-concurrency "server-like" workloads? The
         | language itself I think would be great for such an application,
         | but I have no idea if the runtime itself would be good.
        
         | m_mueller wrote:
         | Does it have to? IMO it's good to have a language focused on
         | that domain as the tradeoffs tend to be different there, hence
         | why Fortran is still actively used.
        
         | oivey wrote:
         | There's a handful of things in a bunch of different domains,
         | although mathy things are the most well developed. The language
         | is definitely general purpose and suitable in a ton of
         | different domains. It's just going to take time for the
         | ecosystem to build up.
        
         | clarkevans wrote:
         | I use it for general application development. Some of the
         | libraries still need quite a bit of work, but it's getting
         | there. For practical web-service development, you need to hook
         | in Revise.jl to get updates to your request handler without
         | having to restart the server.
        
         | adgjlsfhk1 wrote:
         | Yes, but a lot of it isn't as highly optimized as some of the
         | more mathy stuff. Julia has a ton of really good readers of
         | various file formats (CSV, JSON, Arrow, etc). Many of these are
         | as fast as anything out there, but some of them have some
         | performance issues still. https://www.genieframework.com/ is a
         | web framework entirely in Julia, (but probably not the
         | fastest).
         | 
         | If you have specific things you are interested in, feel free to
         | follow up.
        
         | eigenspace wrote:
         | Yes, people are using it for databases, web servers, video
         | games, and a bunch of other things. These domains are less
         | mature than julia's traditional wheelhouse of numerical /
         | technical computing, but I think eventually it can be very
         | competent at almost anything people put the time into building.
         | 
         | Julia is an incredibly flexible language that's all about
         | enabling powerful code transformations, so it can be adapted to
         | very different niches.
        
       | cycomanic wrote:
       | I find the argument that vectorized code is more difficult to
       | read weird. Having to write out the loops becomes tedious very
       | quickly and does often significantly decrease readability. Take
       | for example the element wise multiplication of two 2d square
       | arrays and compare that to the multiplication of an array and a
       | transpose. In the loop the only difference is that the indices
       | are flipped, easy to overlook in a quick read.
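To make the flipped-indices point concrete, here is a small NumPy sketch (arrays are illustrative, not from the article): in the vectorized form the transpose is explicit, while in the loop form only the index order changes.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
B = rng.random((3, 3))

# Vectorized: the transpose is impossible to miss.
C_plain = A * B
C_trans = A * B.T

# Loop form: the only difference is the flipped indices on B,
# which is easy to overlook in a quick read.
D_plain = np.empty_like(A)
D_trans = np.empty_like(A)
for i in range(3):
    for j in range(3):
        D_plain[i, j] = A[i, j] * B[i, j]
        D_trans[i, j] = A[i, j] * B[j, i]

assert np.allclose(C_plain, D_plain)
assert np.allclose(C_trans, D_trans)
```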
       | 
       | My biggest annoyance with Julia, however, is that they decided to
       | follow matlab and do matrix operations by default, and even worse
       | use '.' as the element-wise operation modifier. From my own
       | experience and from teaching many students, one of the primary
       | sources of issues is getting that wrong. I don't know how many
       | times, when debugging some weird matlab issue, it was a matter of
       | finding the missing '.'. They could have chosen any other symbol,
       | but instead they opted for the easiest one to overlook.
        
         | eigenspace wrote:
         | IMO, doing matrix operations by default is the mathematically
         | correct thing to do. E.g. having exp(M::Matrix) give anything
         | other than the matrix exponential is insanity. Same with
         | A::Matrix * B::Matrix giving anything other than matrix
         | multiplication.
         | 
         | As far as the '.' for elementwise operations, I don't think
         | it's that hard to miss, but I guess my eyes are trained for it
         | by now.
         | 
         | There's also an @. macro to make an entire expression element
         | wise though if needed, e.g.                   @. h(f(x) + g(y))
         | 
         | is equivalent to                   h.(f.(x) .+ g.(y))
         | 
          | I think this broadcasting machinery is one of the coolest parts
         | of julia because you get to choose at the call site if you want
         | the function to apply element-wise or to see the whole array,
         | and it works on any container, with user customizable behaviour
         | for user defined containers, and has all sorts of goodies like
         | automatic loop fusion.
        
           | cycomanic wrote:
            | I would argue that an array is something other than a matrix.
            | It's a design decision that matlab (and julia) made to regard
            | every multi-element variable as a matrix. I disagree with
           | that decision, because for most of the code I'm exposed to I
           | don't want matrix operations and it causes issues, e.g. when
           | I move from a scalar implementation to an array
           | implementation of a function.
        
             | eigenspace wrote:
             | Ah sure, yes we did not need to conflate vectors and
             | matrices with arrays and I sometimes wonder if we made the
             | wrong choice there.
             | 
             | I guess I'd say that then someone would get to own the
             | syntax                  [a, b, c]
             | 
              | it'd either be a vector or a non-mathematical array, and
              | whichever choice was made, someone would be unhappy. I
             | think not too much was lost by having them be the same
             | thing, but the ship has sailed anyways.
        
           | bloaf wrote:
           | I personally think a vector library that allowed some form of
           | Einstein notation [1] would be a killer feature for a
           | scientific computing environment.
           | 
           | [1] https://en.wikipedia.org/wiki/Einstein_notation
        
             | eigenspace wrote:
             | Ask and ye shall receive (more than ye bargained for):
             | 
             | https://github.com/mcabbott/Tullio.jl
             | 
             | https://github.com/Jutho/TensorOperations.jl
             | 
             | https://github.com/under-Peter/OMEinsum.jl
             | 
             | Tullio.jl is the most powerful and interesting to me of the
             | above libraries, but they all have cool strengths.
        
           | NegativeLatency wrote:
           | Having zero experience with this I could see myself getting
           | used to the notation. The dot before the addition sign is
           | strange looking to unfamiliar eyes, is it calling + on the
           | result of f.(x)?
        
             | eigenspace wrote:
              | Yeah that's right, it's element-wise addition between the
             | results of f.(x) and g.(y) with loop fusion.
             | 
             | the syntax for broadcasting a regular function is
             | f.(x)
             | 
             | but the syntax for broadcasting an infix function like +,
             | is                   x .+ y
             | 
             | so the dot goes at the start. I think there were some
             | convincing reasons it ended up this way, but I forget what
             | those reasons are. It's just muscle memory for me at this
             | point. You can also treat any infix function as a regular
             | prefix one by wrapping it in parens, e.g.
             | (+).(x, y)
        
         | moelf wrote:
         | 1. in Julia you can't write "vecotrize-styled" code by using
         | `@.` without manually adding the dots everywhere and the result
         | will be as fast in numpy "classic use case"
         | 
          | 2. Having fast for-loops is invaluable precisely because
          | sometimes your workload is not suitable for vectorized-
          | style code (either too hard, or impossible/bad in terms of
          | RAM usage).
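          | 
          | A sketch of both points in plain Julia: `@.` dots a whole
          | expression at once, and the equivalent explicit loop is
          | just as fast:

```julia
x = collect(1.0:5.0)
y = collect(6.0:10.0)

# Point 1: @. inserts the dots for you; this is sin.(x) .+ cos.(y) .^ 2
a = @. sin(x) + cos(y)^2

# Point 2: the explicit loop runs at full speed, so workloads that
# don't vectorize cleanly need no contortions.
b = similar(x)
for i in eachindex(x)
    b[i] = sin(x[i]) + cos(y[i])^2
end

a == b  # true
```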
        
           | cycomanic wrote:
            | 1. I know, and pretty much every line of code I have
            | would need to be preceded with a @. I should also add
            | that Julia did not follow the Matlab insanity of treating
            | 1D vectors as 2D matrices; throwing errors instead helps
            | catch bugs before they cause weird issues.
           | 
           | 2. I agree that this is one of the nice things about Julia.
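            | 
            | To illustrate the 1D-vector point: Julia keeps a 1-D
            | Vector distinct from a 1×n Matrix, and mixing them with +
            | raises a DimensionMismatch instead of silently reshaping:

```julia
v = [1, 2, 3]    # 3-element Vector{Int64}, size (3,)
m = [1 2 3]      # 1x3 Matrix{Int64}, size (1, 3)

size(v) == size(m)  # false: genuinely different shapes

err = try
    v + m           # errors rather than guessing an orientation
catch e
    e
end

err isa DimensionMismatch  # true
```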
        
              | adgjlsfhk1 wrote:
              | In general, what you want to do instead of having `@.`
              | in front of every line is write scalar functions and
              | `@.` their application. For example, instead of writing
              | 
              | ```
              | function f1(A, B, C)
              |     tmp = A .+ B
              |     return tmp .* C
              | end
              | 
              | f1(::Vector, ::Vector, ::Vector)
              | ```
              | 
              | you can write
              | 
              | ```
              | function f2(a, b, c)
              |     tmp = a + b
              |     return tmp * c
              | end
              | 
              | f2.(::Vector, ::Vector, ::Vector)
              | ```
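              | 
              | Filled in with concrete inputs, both spellings agree (a
              | minimal sketch; the names f1/f2 are from the comment
              | above):

```julia
# Vectorized version: dots on every operation, allocates a temporary.
function f1(A, B, C)
    tmp = A .+ B
    return tmp .* C
end

# Scalar version: written for plain numbers, broadcast at the call site.
f2(a, b, c) = (a + b) * c

A, B, C = rand(4), rand(4), rand(4)

f1(A, B, C) ≈ f2.(A, B, C)  # true; the broadcast call also fuses into one loop
```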
        
          | hpcjoe wrote:
          | I think this depends upon the language's intrinsic support
          | for vectors/vector operations. I would (generally) prefer
          | to specify operations using a simpler, more concise syntax,
          | and have the compiler map that onto the hardware. The
          | problem is that most compilers don't do a very good job of
          | this.
         | 
          | I like matrix/vector operations by default because they
          | make the written code quite a bit simpler to understand.
          | Expressing things the way Numpy forces you to means that
          | you have to reason about the interface of your code to
          | Numpy, as well as how Numpy will operate.
         | 
          | For matrix mult, I agree, writing out loops is tedious.
          | This is why something like M = A * B, where M, A, and B
          | are all matrices, is far, far better than writing out the
          | loop representation. I also agree that messing up the index
          | ordering is annoying (and I still do this 30+ years onward
          | with Fortran, C, etc.)
         | 
         | Julia makes this stuff easy to trivial. It makes the interface
         | to using this capability easy to trivial. It makes for good
         | overall performance (I rarely run codes only once).
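          | 
          | The contrast in question, side by side:

```julia
A = rand(3, 3)
B = rand(3, 3)

# High-level: one line, dispatched to an optimized routine (BLAS).
M = A * B

# Low-level: the same computation as explicit loops, where it is easy
# to mix up which index runs over rows and which over columns.
M2 = zeros(3, 3)
for i in 1:3, j in 1:3, k in 1:3
    M2[i, j] += A[i, k] * B[k, j]
end

M ≈ M2  # true
```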
        
       | wodenokoto wrote:
        | The _i_ used in the index _Mij_ is not the same _i_ as in
        | the multiplication _ik_?
       | 
       | How does one make that out in the formula?
        
       ___________________________________________________________________
       (page generated 2021-06-20 23:00 UTC)