[HN Gopher] Multi-Array Queue
       ___________________________________________________________________
        
       Multi-Array Queue
        
       Author : vitpro2213
       Score  : 50 points
       Date   : 2024-05-31 16:01 UTC (1 day ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | bigdubs wrote:
       | Many ring-buffer implementations grow the backing storage array
       | transparently on enqueue but do so in place, discarding the old
       | arrays; what's the advantage of keeping the previous arrays?
       | Naively I'd say it would reduce GC churn because you wouldn't
       | have to free the old arrays, but I'm curious what the impact of
       | that is in benchmarks.
       | 
       | Separately, the simulator is cool and very helpful!
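
The conventional grow-by-copy behavior bigdubs describes can be sketched as follows (illustrative Java, not any particular library's code): when the ring fills, a bigger array is allocated and every element is copied across, so that one enqueue costs O(n).

```java
// Illustrative sketch of a conventional resizing ring buffer: growth
// happens "in place" by allocating a larger array and copying all
// enqueued elements into it, discarding the old array for the GC.
class CopyingRing<T> {
    private Object[] buf = new Object[4];
    private int head = 0, size = 0;

    void enqueue(T item) {
        if (size == buf.length) {              // full: grow by copying
            Object[] bigger = new Object[buf.length * 2];
            for (int i = 0; i < size; i++)     // O(n) copy on this call
                bigger[i] = buf[(head + i) % buf.length];
            buf = bigger;
            head = 0;
        }
        buf[(head + size++) % buf.length] = item;
    }

    @SuppressWarnings("unchecked")
    T dequeue() {
        if (size == 0) throw new IllegalStateException("empty");
        T item = (T) buf[head];
        buf[head] = null;                      // allow GC of the element
        head = (head + 1) % buf.length;
        size--;
        return item;
    }
}
```

The copy keeps memory usage tight and access contiguous, but the occasional O(n) enqueue is exactly the latency "fat tail" discussed in the reply below.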
        
         | vitpro2213 wrote:
         | If you discard the old array (after allocating a bigger one),
         | you must also copy all of the already enqueued material.
         | 
         | Also, that one enqueue will be mega expensive - a clear "fat
         | tail" in the latency histogram.
         | 
         | In MultiArrayQueue you keep all already-enqueued material in
         | place, "just" allocate the bigger array, register the new
         | diversion, enqueue the one new element - and done.
         | 
         | Thanks
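
A minimal sketch of that growth strategy (illustrative Java, not the library's actual code; simplified to a grow-only chain of arrays, whereas the real MultiArrayQueue reuses its arrays as rings via "diversions"):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: when the current array fills, allocate a new,
// twice-as-large array and divert new writes into it. Previously
// enqueued elements stay where they are - no copying on growth.
class GrowOnlyQueue<T> {
    private final List<Object[]> rings = new ArrayList<>();
    private int headRing = 0, headIdx = 0;  // next element to dequeue
    private int tailRing = 0, tailIdx = 0;  // next free slot to fill

    GrowOnlyQueue(int initialCapacity) {
        rings.add(new Object[initialCapacity]);
    }

    void enqueue(T item) {
        if (tailIdx == rings.get(tailRing).length) {   // current array full
            // Only an allocation; the old arrays are kept, nothing copied.
            rings.add(new Object[rings.get(tailRing).length * 2]);
            tailRing++;
            tailIdx = 0;
        }
        rings.get(tailRing)[tailIdx++] = item;
    }

    @SuppressWarnings("unchecked")
    T dequeue() {
        if (headRing == tailRing && headIdx == tailIdx)
            throw new IllegalStateException("empty");
        if (headIdx == rings.get(headRing).length) {   // array exhausted
            headRing++;
            headIdx = 0;
        }
        Object[] head = rings.get(headRing);
        T item = (T) head[headIdx];
        head[headIdx++] = null;                        // allow GC
        return item;
    }
}
```

The worst-case enqueue here pays only for one allocation (plus bookkeeping), which is the point being made: no bulk copy, so no O(n) spike in the latency histogram.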
        
         | hansvm wrote:
         | When I wrote it [0], the goal was semi-bounded [1] latency.
         | 
         | [0] It needs a refactor, but here's some version of the idea.
         | https://github.com/hmusgrave/zcirc
         | 
         | [1] You're not _really_ bounded since you have blocking calls
         | to the underlying allocator (usually somewhere close to the OS,
         | and even that can do more work than you might expect when you
         | ask for a contiguous array), but it's still much cheaper than a
         | bulk copy.
        
       | pphysch wrote:
       | > Do not send me Pull Requests - the code is small so I want to
       | maintain it single-handedly.
       | 
       | Interesting contribution policy. Makes sense in lieu of producing
       | a strict style guide, I suppose.
        
         | lionkor wrote:
         | I thought GitHub let you turn off PRs on a repo -- the author
         | may wanna do that.
        
           | tidwall wrote:
           | PRs cannot be disabled.
        
             | kubanczyk wrote:
             | ...without archiving the repo.
             | 
             | Archived repos however have that ugly yellow warning.
        
           | debugnik wrote:
           | No, you can't disable the PRs tab; you can disable any tab
           | except Code and PRs. Torvalds famously complained about this
           | because he handles pull requests through email, so all PRs on
           | his Linux repo are useless.
        
         | splix wrote:
         | While the author probably wants to own the main code, I'm sure
         | there are other areas where he may want help from external
         | contributors.
         | 
         | For example, unit tests - at least to show that it works and
         | how to use it. Also a build config, like Gradle/Maven, so
         | others would be able to use this lib.
        
       | cyansmoker wrote:
       | Hi @vitpro2213, it's very interesting (at least to me) to find
       | out about this data structure a few months after I had a need
       | for its somewhat distant cousin:
       | https://github.com/Fusion/slotmachine
       | 
       | In my case, I needed a way to book and release two-port tuples
       | really fast to accommodate an RTP simulator. So I wrote that
       | slotmachine data structure and have been running it in
       | production for months, and I can confirm: yes, performance is
       | good.
       | 
       | Note: I should mention that my approach is almost exactly
       | opposite to yours: I create a final backing slice, then create
       | the traversal slices.
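
A rough Java analogy of that "opposite" layout (illustrative only; slotmachine itself is Go and differs in detail): allocate the final backing storage up front, then hand out fixed-size traversal views over it, so nothing is grown or copied afterwards.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: the backing array is allocated once at its final
// size; "slots" are just views (sub-lists) over it, so booking and
// releasing a tuple never allocates or copies element storage.
class PortSlots {
    private final Integer[] backing;
    private final int tupleSize;

    PortSlots(int basePort, int totalPorts, int tupleSize) {
        this.backing = new Integer[totalPorts];
        for (int i = 0; i < totalPorts; i++)
            backing[i] = basePort + i;          // consecutive port numbers
        this.tupleSize = tupleSize;
    }

    // A view over the backing array for slot n (e.g. an RTP/RTCP pair).
    List<Integer> tuple(int n) {
        return Arrays.asList(backing)
                     .subList(n * tupleSize, (n + 1) * tupleSize);
    }
}
```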
        
       ___________________________________________________________________
       (page generated 2024-06-01 23:00 UTC)