[HN Gopher] Puma 6: Sunflower
       ___________________________________________________________________
        
       Puma 6: Sunflower
        
       Author : thunderbong
       Score  : 68 points
       Date   : 2022-10-22 10:10 UTC (12 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | stevebmark wrote:
       | We tried to throw Puma at Rails a few times. Would not recommend.
        | If you're trying to add threading to a Ruby app, I would question
        | how safe you really think dynamic languages are. Fortunately
        | Ruby/Puma/Rails have been out of favor for a while, but if you're
       | stuck with them, I'd just accept your 2-4x higher AWS bill for
       | the hardware you need for Rails rather than add dangerous
       | threading abstractions.
        
         | jbverschoor wrote:
         | What a load of nonsense
        
         | dalyons wrote:
         | "We couldn't figure out some basic thread safety stuff that has
         | been easy since rails 3.2 and so everyone else should give up
         | too"
        
       | juanse wrote:
        | Can anyone share thoughts on how Rack 3 could improve the future
       | of Ruby/Rails performance?
        
         | xtagon wrote:
         | Rack 3 has better support for web servers like Falcon[0] that
         | are designed to take advantage of async or blocking vs non-
         | blocking IO throughout the stack.
         | 
         | Maybe the same for Puma?
         | 
         | [0]: https://github.com/socketry/falcon
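          | 
          | To make that concrete, here's a rough sketch of the kind of
          | thing Rack 3 allows, as I understand the streaming-body part
          | of the spec (illustrative only, not taken from Falcon or Puma):
          | 
          |     # config.ru -- run with e.g. `falcon serve`
          |     app = lambda do |env|
          |       # In Rack 3 the body may respond to #call instead of
          |       # #each; the server hands it a stream to write to.
          |       body = proc do |stream|
          |         begin
          |           3.times { |i| stream.write("chunk #{i}\n") }
          |         ensure
          |           stream.close
          |         end
          |       end
          |       [200, { "content-type" => "text/plain" }, body]
          |     end
          | 
          |     run app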
        
           | juanse wrote:
            | Falcon looks cool, and it seems it could facilitate a
            | long-dreamed-of "rails deploy" in the future, but replacing
            | C (Nginx) with Ruby should come with some kind of big
            | performance penalty.
           | 
           | Looking forward, though.
        
         | grncdr wrote:
         | The cases in which the Rack interface has a noticeable impact
         | on a Rails app are so rare that I suspect it won't have any.
        
       | sytse wrote:
        | Regarding wait_for_less_busy_worker: on the surface it seems
       | suboptimal to add a wait time before responding. Can someone
       | explain why this is the best solution?
        
         | drewbug01 wrote:
         | I may have this wrong, but here's my best understanding of it:
         | 
         | Ruby supports multi-threading, but unless you're using the new
         | (and experimental) Ractor feature, you're subject to the global
         | interpreter lock in most cases (with a few important and useful
         | exceptions, like some kinds of I/O). That means that Ruby
         | servers will typically employ multi-processing in addition to
         | (or in place of) multi-threading as a way to increase
         | performance and use multiple CPUs - otherwise, multiple threads
         | just end up competing for the global interpreter lock and the
         | additional threads don't increase performance as much as you
         | would hope, especially if serving those requests requires any
         | actual work to be done in Ruby code.
         | 
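          | A toy way to see the difference (plain Ruby, nothing Puma
          | specific; the numbers are only illustrative):
          | 
          |     require "benchmark"
          | 
          |     # CPU-bound work holds the lock; sleeping releases it.
          |     cpu = -> { 5_000_000.times { Math.sqrt(rand) } }
          |     io  = -> { sleep 0.5 }
          | 
          |     { "cpu" => cpu, "io" => io }.each do |label, work|
          |       time = Benchmark.realtime do
          |         4.times.map { Thread.new(&work) }.each(&:join)
          |       end
          |       # cpu comes out roughly 4x a single run (threads
          |       # serialize on the lock); io roughly 1x (sleeps overlap).
          |       puts "#{label}: #{time.round(2)}s"
          |     end
          | 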
         | Puma supports a multi-processing mode, where a main Puma
         | process forks multiple workers (each running multiple threads),
          | and each worker listens on the same socket. The Linux kernel
         | distributes the load between the workers, and then the workers
         | distribute the load internally between their threads. Since the
         | global interpreter lock is a per-process thing, this is a
         | pretty effective way to get more throughput for a Ruby server.
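          | 
          | For reference, a minimal cluster-mode config/puma.rb looks
          | something like this (the numbers are illustrative, not a
          | recommendation):
          | 
          |     # config/puma.rb
          |     workers 4      # forked worker processes, one GVL each
          |     threads 1, 5   # min/max threads per worker
          |     port 3000
          | 
          |     preload_app!   # load the app once, before forking
          | 
          |     on_worker_boot do
          |       # re-establish per-process resources (e.g. DB connections)
          |     end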
         | 
         | The problem is that you can't directly control how the kernel
         | is going to balance incoming requests across the multiple
         | workers listening on a socket. Because Ruby _does_ support some
          | instances where threads can run concurrently - like network
          | I/O - it's possible that the kernel may end up handing off
         | multiple requests to one worker process when there were others
          | that were idle and could have handled the request. Doesn't
          | sound like a big deal - but because _most_ threaded Ruby
          | operations do _not_ run concurrently, the actual Ruby code
          | that needs to _process_ those requests ends up competing for
          | the global interpreter lock.
         | 
         | So basically this allows a worker process that is already
         | handling requests to insert a tiny delay before accepting
         | another one - which gives an idle worker process a chance to
         | accept it instead. On balance, this means that you'll get
         | higher utilization of the CPU resources available to you and
         | will often result in a lower average latency for all requests.
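          | 
          | If I'm reading it right, enabling it is a one-liner in the
          | Puma config (the 0.005 second delay below is illustrative):
          | 
          |     # config/puma.rb
          |     workers 4
          |     threads 1, 5
          | 
          |     # Pause briefly before accepting another connection when
          |     # this worker is already busy, so an idle worker can win
          |     # the race for it instead.
          |     wait_for_less_busy_worker 0.005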
         | 
         | The PR that added this in Puma 5.0 is here:
         | https://github.com/puma/puma/pull/2079
        
           | sytse wrote:
            | Thanks so much for the clear write-up and link! Cool to see
            | it was my fellow GitLab team member Kamil who added this.
        
       | johnklos wrote:
       | I have no clue what Puma is in this context, but I really think
       | people should do a quick search before they pick names,
       | particularly with a number, and especially when a name and a
       | number have a horrible history:
       | 
       | https://www.theregister.com/2017/04/11/intel_puma_6_arris/
       | 
        | It also wouldn't hurt if the main page for a project had just a
       | wee bit more information about what it actually is and what it
       | actually does.
        
         | nickjj wrote:
         | > I have no clue what Puma is in this context, but I really
         | think people should do a quick search before they pick names
         | 
          | To be fair, Puma has been around under that name for over 10
          | years. It's one of the most popular app servers for Ruby.
         | 
          | The HN post links to the 6.0 upgrade guide; the repo's readme
          | goes into more detail, and its first line says what it is:
         | 
          |  _Puma is a simple, fast, multi-threaded, and highly parallel
          | HTTP 1.1 server for Ruby/Rack applications._
        
       | slindsey wrote:
       | Puma: A Ruby Web Server Built For Parallelism
        
         | [deleted]
        
         | Elias-Braun wrote:
          | Thank you. I thought it was another image generation AI.
        
           | doubled112 wrote:
           | And I had the Intel Puma modem chipsets come to mind.
           | 
           | Names are hard.
        
             | pmontra wrote:
              | Very hard. Add AMD's Puma microarchitecture to the list
              | (2014).
             | 
              | Intel's Puma chipsets were first released in 2012. Ruby's
             | Puma is from 2011.
        
       ___________________________________________________________________
       (page generated 2022-10-22 23:01 UTC)