[HN Gopher] Provably Space-Efficient Parallel Functional Program...
___________________________________________________________________
Provably Space-Efficient Parallel Functional Programming
Author : matt_d
Score : 36 points
Date : 2022-01-13 19:29 UTC (3 hours ago)
(HTM) web link (blog.sigplan.org)
(TXT) w3m dump (blog.sigplan.org)
| nynx wrote:
| Jeez, scrolling on that website is messed up
| jaquer_1 wrote:
| in what way?
| mgraczyk wrote:
| For me scrolling works fine on mobile but is broken on Chrome
| desktop (the page snaps around in ways I don't expect). Just
| disable JavaScript on blog.sigplan.org; the page works fine
| without it.
| [deleted]
| aristus wrote:
| This sounds a bit like software transactional memory (STM), in
| the sense of giving threads the illusion of total freedom while
| keeping them separate in reality. In STM, data races are
| resolved by rolling back the _victim_ transaction, essentially
| letting threads ask for forgiveness instead of permission.
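|
| A tiny Haskell STM sketch of that forgiveness-over-permission
| style (my own example, nothing from the post; the account and
| transfer names are made up):
|
|     import Control.Concurrent.STM
|
|     -- If two threads run conflicting transfers on the same
|     -- account, the STM runtime rolls back the "victim"
|     -- transaction and transparently re-runs it -- the
|     -- programmer never takes a lock up front.
|     transfer :: TVar Int -> TVar Int -> Int -> STM ()
|     transfer from to n = do
|       balance <- readTVar from
|       check (balance >= n)  -- retries until funds are available
|       writeTVar from (balance - n)
|       modifyTVar' to (+ n)
|
| Each thread would run it with atomically (transfer a b 10).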
|
| Crucially, this new scheme still allows for mutation of shared
| data structures as long as they were allocated before the
| thread's birth. You can enjoy parallel execution managed by the
| compiler, keep your ability to footcannon if you need to, and
| pay nowhere near the bookkeeping overhead of STM.
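|
| A rough Haskell sketch of that "mutate what was allocated before
| the fork" pattern (plain GHC with the async and vector packages,
| not the MPL runtime from the post, so none of the heap-scheduling
| machinery is modeled -- this only shows the programming style the
| scheme permits):
|
|     import Control.Concurrent.Async (concurrently_)
|     import qualified Data.Vector.Mutable as MV
|
|     -- The vector exists before the two tasks are forked, so
|     -- both tasks may mutate it in place. Disjoint index ranges
|     -- keep this race-free; overlapping ranges would be the
|     -- footcannon mentioned above.
|     fillHalves :: IO (MV.IOVector Int)
|     fillHalves = do
|       v <- MV.replicate 8 0
|       concurrently_
|         (mapM_ (\i -> MV.write v i i)       [0 .. 3])
|         (mapM_ (\i -> MV.write v i (i * i)) [4 .. 7])
|       pure v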
| abeppu wrote:
| This seems cool, but the blog article doesn't really delve into
| what the "provably" part is -- I can see how the structure they
| describe, with processors handling GC for a collection of
| structurally linked heaps, might make GC more efficient, but
| they don't really say what is proved or how.
|
| I guess, even from a practical perspective, how does this end up
| comparing with GC in something like the JVM with a functional
| language like Scala?
| Jtsummers wrote:
| Near the top there's a link to a paper which seems to go into
| more technical detail: https://dl.acm.org/doi/10.1145/3434299
|
| Context around the link:
|
| > In our POPL 2021 paper (which won a Distinguished Paper
| award), we proposed a specific heap scheduler and showed that
| it yields provable work and space bounds.
|
| I haven't read it yet, but this seems to be where their claim of
| being provably space-efficient comes from.
|
| EDIT: After a quick read, section 6 is where they prove the
| claim: space usage is bounded relative to an equivalent
| sequential execution.
___________________________________________________________________
(page generated 2022-01-13 23:00 UTC)