[HN Gopher] Understanding the Go Scheduler
___________________________________________________________________
Understanding the Go Scheduler
Author : gnabgib
Score : 88 points
Date : 2025-05-18 17:03 UTC (3 days ago)
(HTM) web link (nghiant3223.github.io)
(TXT) w3m dump (nghiant3223.github.io)
| 90s_dev wrote:
| I heard that the scheduler is a huge obstacle to many potential
| optimizations, is that true?
| NAHWheatCracker wrote:
| In some ways, yes. If you want to optimize at that level you
| ought to use another language.
|
| I'm not a low-level optimization guy, but I've had occasions
| where I wanted control over which threads my goroutines run on,
| or over prioritizing important goroutines. It's a trade-off made
| to keep things less complex, which is standard for Go.
|
| I suppose there's always hope that the Go developers can change
| things.
| silisili wrote:
| You can kinda work around this though. The runtime package has
| LockOSThread, which pins a goroutine to its current OS thread
| and prevents other goroutines from running on that thread.
|
| If you model it so that you have one goroutine per OS thread
| that receives and does work, it gets you close. But in many
| cases that means re-architecting the entire code base, as it's
| not a style I typically reach for.
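|
| A minimal sketch of that worker-per-thread pattern (illustrative
| only, not from the article):
|
|     package main
|
|     import (
|         "runtime"
|         "sync"
|     )
|
|     // worker pins its goroutine to the OS thread it is running
|     // on; no other goroutine runs on that thread until this
|     // goroutine exits or calls UnlockOSThread.
|     func worker(jobs <-chan func(), wg *sync.WaitGroup) {
|         defer wg.Done()
|         runtime.LockOSThread()
|         defer runtime.UnlockOSThread()
|         for job := range jobs {
|             job()
|         }
|     }
|
|     func main() {
|         jobs := make(chan func())
|         var wg sync.WaitGroup
|         for i := 0; i < runtime.GOMAXPROCS(0); i++ {
|             wg.Add(1)
|             go worker(jobs, &wg)
|         }
|         jobs <- func() { /* thread-affine work here */ }
|         close(jobs)
|         wg.Wait()
|     }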
| jasonthorsness wrote:
| It's always a sign of good design when something as complex as
| the scheduler described here "just works" behind the simple
| abstraction of the goroutine. What a great article.
|
| "1/61 of the time, check the global run queue." Stuff like this
| is a little odd; I would have thought this would be a variable
| dependent on the number of physical cores.
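|
| For reference, the check looks roughly like this in the runtime
| (paraphrased and simplified from runtime/proc.go; details vary
| by Go version):
|
|     // Check the global runnable queue once in a while to
|     // ensure fairness; otherwise goroutines in a constantly
|     // refilled local run queue could starve the global one.
|     if pp.schedtick%61 == 0 && sched.runqsize > 0 {
|         lock(&sched.lock)
|         gp := globrunqget(pp, 1)
|         unlock(&sched.lock)
|         if gp != nil {
|             return gp
|         }
|     }
|
| The 61 is a hard-coded constant in the runtime (a prime,
| reportedly chosen so the check doesn't fall into lockstep with
| application patterns), not something derived from the core count.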
| __turbobrew__ wrote:
| Make sure you set GOMAXPROCS when the runtime is cgroup limited.
|
| I once profiled a slow Go program running on a node with 168
| cores, but cpu.max was 2 cores for the cgroup. The runtime
| defaults GOMAXPROCS to the number of visible cores, which was
| 168 in this case. Over half the runtime was the scheduler
| bouncing goroutines between 168 Ps despite cpu.max allowing only
| 2 CPUs.
|
| The JRE is smart enough to figure out if it is running in a
| resource-limited cgroup and make sane decisions based on that,
| but golang has no such thing.
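|
| Until the runtime does this itself, you can derive the value from
| the cgroup v2 quota at startup. A rough sketch of the idea
| (go.uber.org/automaxprocs does a hardened version of this):
|
|     package main
|
|     import (
|         "fmt"
|         "os"
|         "runtime"
|         "strconv"
|         "strings"
|     )
|
|     // setMaxProcsFromCgroup reads the cgroup v2 cpu.max file,
|     // which contains "<quota> <period>" (or "max <period>" when
|     // unlimited), and caps GOMAXPROCS at the effective quota.
|     // Minimal error handling; a sketch, not production code.
|     func setMaxProcsFromCgroup() {
|         data, err := os.ReadFile("/sys/fs/cgroup/cpu.max")
|         if err != nil {
|             return // no cgroup v2 limit visible, keep default
|         }
|         fields := strings.Fields(string(data))
|         if len(fields) != 2 || fields[0] == "max" {
|             return // no CPU quota set
|         }
|         quota, err1 := strconv.ParseFloat(fields[0], 64)
|         period, err2 := strconv.ParseFloat(fields[1], 64)
|         if err1 != nil || err2 != nil || period <= 0 {
|             return
|         }
|         if procs := int(quota / period); procs >= 1 {
|             runtime.GOMAXPROCS(procs)
|         }
|     }
|
|     func main() {
|         setMaxProcsFromCgroup()
|         fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
|     }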
| xyzzy_plugh wrote:
| Relevant proposal to make GOMAXPROCS cgroup-aware:
| https://github.com/golang/go/issues/73193
| yencabulator wrote:
| This should be automatic these days (for the basic scenarios).
|
| https://github.com/golang/go/blob/a1a151496503cafa5e4c672e0e...
| jasonthorsness wrote:
| uh isn't that change 3 hours old?
| yencabulator wrote:
| Oh heh yes it is. I just remembered the original discussion
| from 2019 (https://github.com/golang/go/issues/33803) and
| grepped the source tree for cgroup to see if that got done
| or not, but didn't check _when_ it got done.
|
| As said in 2019, import
| https://github.com/uber-go/automaxprocs to get the
| functionality ASAP.
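|
| Using it is a one-line blank import in package main; the
| package's init adjusts GOMAXPROCS at startup:
|
|     import _ "go.uber.org/automaxprocs"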
| jasonthorsness wrote:
| super-weird coincidence but welcome, I have been waiting
| for this for a long time!
| williamdclt wrote:
| I honestly can't count on my fingers and toes how many times
| something precisely relevant to me was brought up or sorted
| out hours to days before I looked it up. And more than once,
| by people I personally knew!
|
| Always a weird feeling, it's a small world
| formerly_proven wrote:
| This is probably going to save quadrillions of CPU cycles by
| making an untold number of deployed Go applications a bit more
| CPU-efficient. Since Go is the "lingua franca" of containers,
| many ops people assume the Go runtime is container-aware, but
| it isn't (well, not in any released version yet).
|
| If they'd now also make the GC respect memory cgroup limits
| (i.e. automatic GOMEMLIMIT), we'd probably be freeing up a
| couple petabytes of memory across the globe.
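|
| Until then, the limit has to be set by hand, either via the
| GOMEMLIMIT environment variable or programmatically (the 2 GiB
| value below is just an illustration; leave headroom under the
| cgroup memory limit):
|
|     // Equivalent of running with GOMEMLIMIT=2GiB.
|     // debug.SetMemoryLimit sets a soft limit in bytes (Go 1.19+).
|     package main
|
|     import "runtime/debug"
|
|     func main() {
|         debug.SetMemoryLimit(2 << 30)
|     }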
|
| Java has been doing these things for a while; even OpenJDK 8 has
| had those patches since probably before COVID.
| kortex wrote:
| Fantastic writeup! The visualizations are great, and it's
| thorough but readable.
___________________________________________________________________
(page generated 2025-05-21 23:00 UTC)