[HN Gopher] Concurrency in Julia
___________________________________________________________________
Concurrency in Julia
Author : leephillips
Score : 114 points
Date : 2021-11-09 18:09 UTC (4 hours ago)
(HTM) web link (lwn.net)
(TXT) w3m dump (lwn.net)
| cbkeller wrote:
| The Folds.jl package [1] mentioned in the article is very nicely
| written, IMHO.
|
| For another alternative to Julia's built-in `Threads.@threads`
| macro, folks may also be interested in checking out `@batch` from
| Polyester.jl [2] (formerly CheapThreads.jl), which features
| particularly low-overhead threading.
|
| [1] https://github.com/JuliaFolds/Folds.jl
|
| [2] https://github.com/JuliaSIMD/Polyester.jl
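| A minimal sketch of what swapping the two macros looks like
| (assuming Polyester.jl is installed; `axpy_threads!` and
| `axpy_batch!` are names made up for illustration):

```julia
# Toy comparison: Base threading vs. Polyester's lower-overhead @batch.
using Base.Threads

function axpy_threads!(y, a, x)
    @threads for i in eachindex(y, x)   # iterations split across threads
        y[i] += a * x[i]                # each iteration is independent
    end
    return y
end

# The Polyester version differs only in the macro:
#   using Polyester
#   function axpy_batch!(y, a, x)
#       @batch for i in eachindex(y, x)
#           y[i] += a * x[i]
#       end
#       return y
#   end
```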
| voorwerpjes wrote:
| For a job interview I did the same parallelization of an
| `isPrime` in Rust, and it was basically as simple as the one
| for Julia, just replacing a call to `into_iter()` with
| `into_par_iter()`. I also show the strong scaling
| relationship, which I wish everything discussing concurrency
| did. Link for the curious:
| https://github.com/pcrumley/parallel_primes_rs
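| The Julia counterpart, in the spirit of the article's
| examples, might look like this (a naive sketch;
| `countprimes` is a made-up name):

```julia
using Base.Threads

# Naive trial-division primality test, as in the article's examples
function isprime_naive(n)
    n < 2 && return false
    for d in 2:isqrt(n)
        n % d == 0 && return false
    end
    return true
end

# Parallel count of primes up to n; an Atomic counter avoids a data race
function countprimes(n)
    count = Atomic{Int}(0)
    @threads for i in 2:n
        isprime_naive(i) && atomic_add!(count, 1)
    end
    return count[]
end
```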
| alhirzel wrote:
| I really like how simply Julia unifies many computational
| concepts. For example, MPI is a classic solution to some of the
| same problems, but it is only for coarse parallelism. OS-specific
| threading libraries are available for fine-grained parallelism,
| but the ligature between the two is typically messy.
|
| I have heard about Go's parallelism being awesome, does anyone
| know of a comparison between Go's and Julia's models?
| gbrown wrote:
| I've been playing with GPGPU on Julia lately also, and it
| really seems like things have come a long way in the last few
| years. Check out the Juliacon 2021 talk on GPU compute if
| you're interested.
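| Much of this rests on Julia's generic-array style, which can
| be sketched like so (the CUDA.jl usage is assumed and shown
| commented out, since it needs a CUDA-capable GPU):

```julia
# The same broadcast code runs on CPU Arrays or, with CUDA.jl, on GPU
# CuArrays; chained broadcasts fuse into a single kernel either way.
saxpy(a, x, y) = a .* x .+ y

x = ones(Float32, 4)
y = zeros(Float32, 4)
cpu_result = saxpy(2f0, x, y)        # runs on the CPU

# On a GPU (assuming CUDA.jl is installed):
#   using CUDA
#   gx, gy = CuArray(x), CuArray(y)
#   gpu_result = saxpy(2f0, gx, gy)  # same function, compiled to a kernel
```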
| e12e wrote:
| This one?
|
| https://live.juliacon.org/talk/P8KJSW
| vchuravy wrote:
| Probably more this talk about CUDA 3.0
| https://live.juliacon.org/talk/UGX8YR or the workshop
| https://www.youtube.com/watch?v=Hz9IMJuW5hU
| alhirzel wrote:
| Are AMD and NVidia GPUs on par nowadays? Or is it still an
| NVidia-first world when it comes to compute support?
| eigenspace wrote:
| There was actually a question about this earlier today:
| https://discourse.julialang.org/t/amdgpu-jl-status/71191
|
| TLDR: The entire GPUArrays.jl test suite now passes with
| AMDGPU.jl. There are still some missing features and it is
| not as mature as the nvidia version, but this space is
| progressing rapidly, and benefited from the generic GPU
| compilation pipeline that was initially built for CUDA.jl
| Sukera wrote:
| According to this[1], Julia's threading/task model is in part
| inspired by Go. I don't know about any detailed analysis on
| similarities and/or differences though.
|
| [1] https://julialang.org/blog/2019/07/multithreading/
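| That blog post's headline example shows the Go influence:
| tasks are spawned onto a thread pool and composed with
| `fetch`, and `Channel`s behave much like Go channels:

```julia
using Base.Threads

# Composable parallelism: nested @spawn calls share one scheduler
function fib(n::Int)
    n <= 1 && return n
    t = @spawn fib(n - 2)        # may run on any thread
    return fib(n - 1) + fetch(t)
end

# Channels for inter-task communication, much like Go channels
ch = Channel{Int}(32)            # buffered channel of capacity 32
@spawn put!(ch, fib(10))
println(take!(ch))
```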
| adgjlsfhk1 wrote:
| It's probably worth mentioning that Julia has MPI.jl which
| creates an array type that works over MPI transparently to the
| user.
| masklinn wrote:
| > I have heard about Go's parallelism being awesome, does
| anyone know of a comparison between Go's and Julia's models?
|
| There's nothing fundamentally special to Go's "parallelism".
|
| It uses userland threads with small stacks that were nearly
| preemptible (and are now actually preempted, I think, since a
| few versions back; before that, goroutines had implicit yield
| points at function calls, so a tight loop without function
| calls could lock out the scheduler).
|
| The parallelism comes from the m:n scheduling. Go has no
| constructs for massive parallelisation, where you hand it a
| loop and it efficiently runs that on a hundred cores, at least
| not built in.
| adgjlsfhk1 wrote:
| I don't think there's anything too special with Go's
| constructs, it's just that they are very well optimized and
| widely used.
| initplus wrote:
| Having lightweight userland threads fundamentally changes the
| way that you write concurrent programs. There is no need to
| worry about creating too many threads or of thread creation
| being too heavy. Old school patterns like
| process/thread/goroutine per connection are very natural to
| write in Go and result in dead simple code that performs well
| under load.
|
| Reasoning about threads & basic blocking calls is a simpler
| mental model compared to async/await style concurrency in my
| opinion.
|
| Go is fundamentally designed for developing backend
| applications, not high-performance mathematical/scientific
| computing. Features like unrolling for-loops across hundreds
| of cores or offloading to GPUs would be out of place in the
| language.
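| The same task-per-connection pattern is available in Julia
| via the Sockets standard library; a hypothetical echo
| server, one lightweight task per connection:

```julia
using Sockets

# Echo lines back until the client sends an empty line or disconnects
function handle_client(sock)
    while isopen(sock)
        line = readline(sock)        # plain blocking read
        isempty(line) && break
        println(sock, line)          # plain blocking write
    end
    close(sock)
end

# Accept loop: goroutine-per-connection style, one cheap task per client
function serve(server)
    while isopen(server)
        sock = accept(server)        # blocks until a client connects
        @async handle_client(sock)
    end
end
```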
| dragontamer wrote:
| > Having lightweight userland threads fundamentally changes
| the way that you write concurrent programs. There is no
| need to worry about creating too many threads or of thread
| creation being too heavy. Old school patterns like
| process/thread/goroutine per connection are very natural to
| write in Go and result in dead simple code that performs
| well under load.
|
| Well... there's still a problem at the tens-of-millions-of-
| goroutines scale.
|
| But OS threads generally ran into trouble at around 100,000
| threads. Making something "more lightweight" than pthreads
| makes sense, because 10,000,000 coroutines is a fundamentally
| different program design than 100,000 pthreads.
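| For scale: spawning a hundred thousand Julia tasks is
| routine, where the same number of OS threads would not be
| (a toy sketch; `spawn_many` is a made-up name):

```julia
using Base.Threads

# Tasks are cheap to create: each one here does trivial work, and
# @sync waits for all tasks spawned lexically inside its block.
function spawn_many(n)
    results = zeros(Int, n)
    @sync for i in 1:n
        @spawn results[i] = i
    end
    return sum(results)
end
```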
| skybrian wrote:
| This article uses very inefficient mathematical computations to
| illustrate concurrency. That's fine, they are just examples. But
| you might also be interested in learning how it's really done,
| for example by reading the Primes package. [1]
|
| [1]
| https://github.com/JuliaMath/Primes.jl/blob/master/src/Prime...
| leephillips wrote:
| Yeah, as I wrote there, "As with all of the examples in the
| article, isprime() leaves many potential optimizations on the
| table in the interests of simplicity."
|
| The source you link to is longer than my entire article. But it
| is good to see an example of a "real" program.
| WhatIsDukkha wrote:
| The care you took in showing the usefulness of the task
| migration was appreciated, thanks.
___________________________________________________________________
(page generated 2021-11-09 23:00 UTC)