Post AUUkUtFDr6molQbcDg by bplein@bvp.me
 (DIR) Post #AUUkUapOKPQde6NjSS by jerry@infosec.exchange
       2023-04-10T00:18:03Z
       
       0 likes, 0 repeats
       
       100 is the number of CPU cores I currently need to keep the Sidekiq queues from getting too backed up on Infosec.exchange. 😅
       
 (DIR) Post #AUUkUewz07sUQnxqqW by vmstan@vmst.io
       2023-04-10T00:19:07Z
       
       0 likes, 0 repeats
       
       @jerry that’s all?
       
 (DIR) Post #AUUkUpgN6E7tiTHEyu by vmstan@vmst.io
       2023-04-10T00:19:53Z
       
       0 likes, 0 repeats
       
       @jerry I’m way over-scaled 😆
       
 (DIR) Post #AUUkUqIIpCT3c71X6W by jerry@infosec.exchange
       2023-04-10T00:20:58Z
       
       0 likes, 0 repeats
       
       @vmstan we’ll see how it does tomorrow.
       
 (DIR) Post #AUUkUqvIUDexZ3Gfsu by jerry@infosec.exchange
       2023-04-10T00:22:44Z
       
       0 likes, 0 repeats
       
       @vmstan I am curious to know how many you have now…
       
 (DIR) Post #AUUkUrhrZeVXzfzSaG by bplein@bvp.me
       2023-04-10T00:28:35Z
       
       0 likes, 0 repeats
       
       @jerry @vmstan How are you segregating your Sidekiq instances? Does each only cover a single queue (default/ingress/push/pull/scheduler/mailer) or do you have some queues covered by multiple instances?
       
 (DIR) Post #AUUkUskNhm7HDmAz9E by jerry@infosec.exchange
       2023-04-10T00:30:30Z
       
       0 likes, 0 repeats
       
       @bplein @vmstan I run multiple processes for each queue flavor on each system (101 processes to be precise), with the most dedicated to push (10 processes per system), but only one scheduler on one system, per the docs
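
       As a rough sketch of that layout (several processes per queue flavor on each host, the most on push, and a single scheduler cluster-wide), the Python snippet below illustrates one way to lay it out. Only the push count and the single scheduler come from the post; every other number and name is an assumption for illustration.

           # Rough sketch of a per-host Sidekiq process mix as described above.
           # Only the push count (10 per system) and the single scheduler are
           # stated in the thread; the other counts and host names are assumed.
           PER_HOST = {
               "push": 10,     # stated: 10 push processes per system
               "ingress": 5,   # assumed
               "default": 5,   # assumed
               "pull": 4,      # assumed
               "mailer": 1,    # assumed
           }
           HOSTS = ["sidekiq-01", "sidekiq-02", "sidekiq-03"]  # assumed host names

           def processes_for(host: str) -> list[str]:
               """List the Sidekiq process labels to run on one host."""
               procs = [f"{queue}-{n}" for queue, count in PER_HOST.items()
                        for n in range(1, count + 1)]
               # Per the Mastodon docs, exactly one scheduler process runs cluster-wide.
               if host == HOSTS[0]:
                   procs.append("scheduler-1")
               return procs

           for host in HOSTS:
               print(f"{host}: {len(processes_for(host))} processes")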
       
 (DIR) Post #AUUkUtFDr6molQbcDg by bplein@bvp.me
       2023-04-10T00:36:44Z
       
       0 likes, 0 repeats
       
       @jerry @vmstan And do any of your instances ever reach zero? What I’m curious about is the oft-written-about approach of having one instance (for example) run default, ingress, push, and pull; the next one runs ingress, default, pull, and push; the next runs push, default, ingress, and pull; and so on, so that if any instance runs out of work in its first (primary) queue it starts working on the next queue in its list. I run this way, but as a single-user instance it’s overkill 😀
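
       That rotation can be sketched as follows; this is a minimal illustration in Python, using a simple left-rotation rather than the exact orderings above, and the concurrency value (-c 25) and process names are assumptions. With Sidekiq, queues passed via repeated -q flags without weights are checked in strict order, so each process drains its primary queue before falling through to the others.

           # Generate one queue ordering per Sidekiq process, each with a
           # different primary queue, so an idle process falls through to the
           # remaining queues instead of sitting on an empty primary.
           QUEUES = ["default", "ingress", "push", "pull"]

           def rotated_orderings(queues):
               """Yield a left-rotated copy of the queue list for each process."""
               for i in range(len(queues)):
                   yield queues[i:] + queues[:i]

           for n, order in enumerate(rotated_orderings(QUEUES), start=1):
               # Repeated -q flags without weights mean strict priority order.
               flags = " ".join(f"-q {q}" for q in order)
               print(f"sidekiq-{n}: bundle exec sidekiq -c 25 {flags}")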
       
 (DIR) Post #AUUkUtyF9inb13fZOS by jerry@infosec.exchange
       2023-04-10T00:44:00Z
       
       0 likes, 0 repeats
       
       @bplein @vmstan it stays pretty close to zero all the time, but it took 100 cores/100 processes to get there
       
 (DIR) Post #AUUkUuWd5sIwjhl1zU by vmstan@vmst.io
       2023-04-10T01:35:09Z
       
       0 likes, 0 repeats
       
       @jerry @bplein Here is my current configuration. The thread count per process probably overcommits the CPUs, but the queues process fast. Total of 6 vCPU. The scheduler has been using a lot of compute power, so I put it on its own box just this week.
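
       For context on the overcommit remark, the ratio is just total Sidekiq threads versus available vCPUs; the numbers below are assumptions for illustration, since the actual per-process figures aren’t quoted in the text.

           # Back-of-the-envelope overcommit check; the process and thread
           # counts are assumed, only the 6 vCPU total comes from the post.
           processes = 5             # assumed number of Sidekiq processes
           threads_per_process = 25  # assumed Sidekiq concurrency (-c) per process
           vcpus = 6                 # stated: 6 vCPU total

           overcommit = processes * threads_per_process / vcpus
           print(f"{processes * threads_per_process} threads on {vcpus} vCPU "
                 f"= about {overcommit:.1f}x overcommit")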
       
 (DIR) Post #AUUkUuY30bRGo6QACW by bplein@bvp.me
       2023-04-10T00:42:16Z
       
       0 likes, 0 repeats
       
       @jerry @vmstan
       
 (DIR) Post #AUUkUvMjy7zLLK8eDQ by jerry@infosec.exchange
       2023-04-10T02:01:52Z
       
       0 likes, 0 repeats
       
       @vmstan @bplein oh yeah, I had it on a 32-core system and, while it could keep up, at points there were backlogs several minutes long
       
 (DIR) Post #AUUkUvrE8mNIrsOzjc by vmstan@vmst.io
       2023-04-10T01:37:45Z
       
       0 likes, 0 repeats
       
       @jerry @bplein wait I just saw that your 100 threads each have their own core?!