[HN Gopher] Why Programming Languages Matter [video]
___________________________________________________________________
Why Programming Languages Matter [video]
Author : hazelnut-tree
Score : 20 points
Date : 2023-10-27 20:53 UTC (2 hours ago)
(HTM) web link (www.youtube.com)
(TXT) w3m dump (www.youtube.com)
| Supermancho wrote:
| https://www.youtube.com/watch?v=JqYCt9rTG8 is not available
| anymore.
| gabrielsroka wrote:
| https://youtu.be/JqYCt9rTG8g
|
| Missing the last letter
| jokoon wrote:
| oddly, there are very few job openings for working on things
| related to programming languages.
| anon291 wrote:
| I feel like every major company is hiring for AI compiler
| engineers right now (based on my inbox at least). May not be
| directly related to general 'programming languages', but my
| take, as someone in the industry, is that all the PL people are
| working on this right now.
| synergy20 wrote:
| same here, some companies are hiring entire compiler teams for
| AI, in fact.
|
| I was told Rice and UIUC have the best compiler programs,
| though not necessarily AI-related; the material should be
| similar, though.
| mjfl wrote:
| Very little innovation in programming languages has addressed
| the new realities at the hardware level, especially the
| transition from serial to parallel execution.
| nomel wrote:
| Does anyone have any good counterpoints to this? From my naive
| perspective, this seems to be relatively true.
|
| I've always assumed that, by now, I would be able to write
| code in a semi-mainstream language and the compiler would make
| it somewhat parallel. No need for threads, or for me to think
| about it.
|
| There are projects like https://polly.llvm.org, but I guess I
| assumed there would be more progress over the decades.
| tylerhou wrote:
| Take a look at Halide, which can autovectorize and
| multithread graphics computations (but does require a
| restricted language).
| nomel wrote:
| I was thinking outside of "embarrassingly parallel" [1]
| type work. :) But, that is fair.
|
| [1] https://en.wikipedia.org/wiki/Embarrassingly_parallel#:
| ~:tex....
| 7734128 wrote:
| Especially when doing things like mapping or list
| comprehension. I'd love to be able to do operations on
| collections in parallel by simply marking my functions as
| pure. Just a simple xs.map {x -> f(x)} combined with "f"
| marked as pure confirmed by the compiler to make the magic
| happen.
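No mainstream compiler does this automatically today; a minimal Python sketch (not from the thread, and with a hypothetical `f`) of what the explicit opt-in currently looks like with the standard library:

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    # A pure function: output depends only on the input, no side effects.
    return x * x + 1

xs = list(range(10))

# What the commenter wants the compiler to do on its own: since f is
# pure, the map over xs can be farmed out to workers safely. (For
# CPU-bound work in CPython you would use ProcessPoolExecutor instead,
# since the GIL serializes pure-Python threads.)
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(f, xs))

# Identical to the sequential result, because f shares no state.
sequential = [f(x) for x in xs]
assert parallel == sequential
```

The point of the comment is that this opt-in boilerplate, and the purity guarantee, could in principle be the compiler's job rather than the programmer's.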
| jcranmer wrote:
| Proving legality of transformations in the compiler is
| frequently impossible. Consequently, the main mode of
| implementation has been to essentially think of the problems
| in terms of the user saying that this loop is parallel,
| please make it run in parallel. OpenMP or Rust's rayon crate,
| for example. The other similar innovation has been
| programming SIMD as if each lane were an independent thread,
| which is essentially the model of ispc or CUDA (or #pragma
| omp simd, natch).
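The lane-as-thread (SPMD) model can be sketched in plain Python: the programmer writes a scalar "kernel" for a single lane, and a launcher applies it across every lane index. The names here are illustrative, not any real ispc or CUDA API, and the launcher runs serially where real hardware would run lanes in lockstep:

```python
def saxpy_kernel(lane, a, x, y, out):
    # Each "lane" behaves like an independent thread operating on its
    # own element; a SIMD target would execute the lanes in lockstep.
    out[lane] = a * x[lane] + y[lane]

def launch(kernel, n_lanes, *args):
    # Serial stand-in for a vectorized or parallel dispatch.
    for lane in range(n_lanes):
        kernel(lane, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
launch(saxpy_kernel, 4, 2.0, x, y, out)
# out is now [12.0, 24.0, 36.0, 48.0]
```

The appeal of this model is that the per-lane kernel contains no cross-lane dependences by construction, so the compiler never has to prove the loop parallel.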
|
| The other big obstacle is that most code isn't written to be
| able to take advantage of theoretical autoparallelization:
| you really want data in struct-of-arrays format, but most
| code tends to be written in array-of-structs format. This
| means the vectorization cost model (even when the
| transformation is proven legal, whether by user assertion or
| by a sufficiently smart compiler) sees that it would need a
| lot of gathers and scatters, and gives up on any viable path
| to vectorization very quickly.
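The layout difference can be sketched in a few lines of Python (the field names are made up for illustration; in a compiled language the two layouts differ in memory stride, which is what the cost model cares about):

```python
# Array-of-structs: each record keeps all its fields together. A
# vectorized loop over just the x field would have to "gather"
# strided elements out of each record.
aos = [{"x": 1.0, "y": 4.0}, {"x": 2.0, "y": 5.0}, {"x": 3.0, "y": 6.0}]
sum_x_aos = sum(p["x"] for p in aos)

# Struct-of-arrays: each field is its own contiguous array, so the
# same loop touches one dense run of memory; that is the layout
# autovectorizers want.
soa = {"x": [1.0, 2.0, 3.0], "y": [4.0, 5.0, 6.0]}
sum_x_soa = sum(soa["x"])

assert sum_x_aos == sum_x_soa == 6.0
```

Both layouts compute the same result; only the access pattern, and therefore the vectorizer's cost estimate, differs.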
| bear8642 wrote:
| Array languages, such as APL and others, tend to be easily
| parallisable given primitives tend to focus on intent and how
| to transform data rather than imperative operations.
|
| Some of the SIMD operations feel very reminicent of APL
| primitives
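A rough Python analogy (not APL itself) for two real APL primitives, reduce (+/) and scan (+\): each states a whole-array intent in one expression, without fixing a serial evaluation order the compiler would then have to prove it can break:

```python
from functools import reduce
from itertools import accumulate
import operator

v = [1, 2, 3, 4]

# APL reduce "+/v": a single primitive says "sum this array",
# leaving the execution order to the implementation.
total = reduce(operator.add, v)   # 10

# APL scan "+\v": running sums, again one data-level primitive.
running = list(accumulate(v))     # [1, 3, 6, 10]
```

This intent-first style is why array primitives map so naturally onto SIMD reductions and prefix-sum instructions.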
___________________________________________________________________
(page generated 2023-10-27 23:00 UTC)