[HN Gopher] Show HN: Pocache, preemptive optimistic caching for Go
       ___________________________________________________________________
        
       Show HN: Pocache, preemptive optimistic caching for Go
        
       Hey all, I recently published this Go package, and would like to
       _show off_ as well as get feedback!
        
       Author : bnkamalesh
       Score  : 80 points
       Date   : 2024-10-11 13:21 UTC (9 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | pluto_modadic wrote:
       | so.... if I initially got the key "foo" at time T=00:00:00, this
       | library would re-query the backing system until time T=00:00:60?
       | even if I requery it at T=:01? vs... being a write-through cache?
       | I guess you're expecting other entries in the DB to go around the
       | cache and update behind your back.
       | 
       | if you are on that threshold window, why not a period where the
       | stale period is okay? T0-60 seconds, use the first query (don't
       | retrigger a query) T60-120 seconds, use the first query but
       | trigger a single DB query and use the new result. repeat until
       | the key is stale for 600 seconds.
       | 
        | that is, a minimum of 2 queries: the initial one, plus the
        | first preemptive one at 60 seconds (in the cache for 10
        | minutes total),
       | 
       | and a maximum of 11 queries (over 10 minutes) (the initial one
       | that entered the key, and if people ask for it once a minute, a
       | preemptive one at the end of those minutes, for 20 minutes total
       | in the cache).
        
         | zimpenfish wrote:
         | > if I initially got the key "foo" at time T=00:00:00, this
         | library would re-query the backing system until time
         | T=00:00:60? even if I requery it at T=:01?
         | 
         | From what I understood of the README (10 minute expiry, 1
         | minute window) only cache requests between 09:00 to 09:59 will
         | trigger a pre-emptive backing fetch.
         | 
         | ie. T0-539 seconds uses the first query (no re-fetch), T540-599
         | does a pre-emptive re-fetch (as long as no-one else is
         | currently doing that), T600- would do a fetch and start the
         | whole timer again.
        
           | bnkamalesh wrote:
            | @zimpenfish yes, you are right. Refetch is initiated on
            | the first Get between 9-10 mins, and the timer is reset as
            | soon as the backing fetch is successful.
        
             | NovaX wrote:
             | One optimization for background refresh is coalescing the
             | individual reloads into a batch operation based on a
             | time/space window. Here is how we do it in the Java world.
             | [1]
             | 
              | [1] https://github.com/ben-manes/caffeine/tree/master/examples/c...
        
               | mh- wrote:
               | Thank you for your OSS work! I used Caffeine many years
               | ago.
        
       | tbiehn wrote:
       | Interesting idea - do you handle 'dead keys' as well? Let's say
       | you optimistically re-fetch a few times, but no client re-
       | requests?
        
         | bnkamalesh wrote:
          | since the underlying storage is an LRU, I just ignored dead
          | keys. Nope, there's no client re-request or retries; that is
          | left up to the "updater" function.
        
       | serialx wrote:
       | PSA: You can also use singleflight[1] to solve the problem. This
       | prevents the thundering herd problem. Pocache is an
       | interesting/alternative way to solve thundering herd indeed!
       | 
       | [1]: https://pkg.go.dev/golang.org/x/sync/singleflight
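
The mechanism behind singleflight can be sketched in stdlib-only Go (this is an illustration of the idea, not x/sync's real implementation): concurrent callers for the same key share one in-flight fetch instead of each hitting the backend.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// call holds one in-flight fetch; waiters block on wg and share val.
type call struct {
	wg  sync.WaitGroup
	val string
}

// group deduplicates concurrent calls per key, like singleflight.Group.
type group struct {
	mu sync.Mutex
	m  map[string]*call
}

func (g *group) Do(key string, fn func() string) string {
	g.mu.Lock()
	if g.m == nil {
		g.m = make(map[string]*call)
	}
	if c, ok := g.m[key]; ok {
		// A fetch for this key is already in flight: wait and share it.
		g.mu.Unlock()
		c.wg.Wait()
		return c.val
	}
	c := new(call)
	c.wg.Add(1)
	g.m[key] = c
	g.mu.Unlock()

	c.val = fn() // only this caller actually hits the backend
	c.wg.Done()

	g.mu.Lock()
	delete(g.m, key)
	g.mu.Unlock()
	return c.val
}

func main() {
	var g group
	var backendCalls int32
	release := make(chan struct{})
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			g.Do("user:42", func() string {
				atomic.AddInt32(&backendCalls, 1)
				<-release // hold the fetch open so the other callers pile up
				return "payload"
			})
		}()
	}
	time.Sleep(100 * time.Millisecond) // let all five goroutines reach Do
	close(release)
	wg.Wait()
	fmt.Println("backend calls:", backendCalls) // 1, not 5
}
```

The real package adds error returns, a shared-result flag, DoChan, and Forget; this sketch only shows the thundering-herd suppression itself.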
        
         | bnkamalesh wrote:
         | interesting, thanks for that. I'll check it out
        
         | kbolino wrote:
         | I'm confused by the decision in DoChan to return a channel
         | (instead of accepting one supplied by the caller) and then,
            | given that, also _not_ to close that channel (is something else
            | going to be sent to the channel in the future?). Both seem like
            | strange/unnecessary design decisions.
        
           | neild wrote:
           | Returning a channel avoids questions of what happens if
           | sending to a caller-supplied channel blocks. DoChan returns a
           | channel with a single-element buffer, so a single send to the
           | channel will always succeed without blocking, even if the
           | caller has lost interest in the result and discarded the
           | channel.
           | 
           | DoChan doesn't close the channel because there isn't any
           | reason to do so.
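
The buffered-channel point can be shown in a few lines; `doChan` is a hypothetical name mimicking the shape described, not the real x/sync API:

```go
package main

import "fmt"

// doChan returns a channel with a one-element buffer, so the single
// result send below can never block, even if the caller has already
// abandoned the channel.
func doChan(fn func() int) <-chan int {
	ch := make(chan int, 1) // buffer of one: the send always succeeds
	go func() { ch <- fn() }()
	return ch
}

func main() {
	fmt.Println(<-doChan(func() int { return 42 })) // prints 42
	// The channel is never closed: a second receive would block
	// forever rather than yield a zero value.
}
```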
        
             | kbolino wrote:
             | A non-blocking send would work just as well for that issue,
             | is a standard part of the language, and would support user-
             | supplied channels, but it would still be at risk of
             | panicking when sending to a closed channel. I think there
             | ought to be a safe way to send to a closed channel, but the
             | language authors disagree, so that's not really on the
             | library authors (though they could still recover from the
             | panic).
             | 
              | However, not closing the channel _you specifically chose to
              | control all sending to_ is just lazy/rude. Even though the
             | caller _should_ receive from the channel once and then
             | forget about it, closing the channel after sending would
             | prevent incorrect subsequent receives from hanging forever.
             | 
             | All this having been said, contributing to these libraries
             | seems better than complaining about them, but I don't know
             | how the golang.org/x stuff is maintained; looks like this
             | one is here: https://github.com/golang/sync
        
               | neild wrote:
               | A non-blocking send doesn't work in this case. Consider:
               | User provides DoChan an unbuffered channel, and then
               | reads a value from it. If the send is nonblocking and
               | occurs before the user reads from the channel, the value
               | is lost.
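
The lost-value scenario is easy to demonstrate. `trySend` below is the non-blocking send being discussed; on an unbuffered channel with no receiver ready, the default case fires and the value is silently dropped:

```go
package main

import "fmt"

// trySend performs a non-blocking send: it returns false, dropping v,
// if no receiver (or buffer slot) is ready at that instant.
func trySend(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}

func main() {
	unbuffered := make(chan int)
	fmt.Println(trySend(unbuffered, 42)) // false: no receiver ready, value lost

	buffered := make(chan int, 1)
	fmt.Println(trySend(buffered, 42)) // true: the buffer absorbs it
}
```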
        
       | bww wrote:
       | You may be interested in Groupcache's method for filling caches,
       | it solves the same problem that I believe this project is aimed
       | at.
       | 
       | Groupcache has a similar goal of limiting the number of fetches
       | required to fill a cache key to one--regardless of the number of
       | concurrent requests for that key--but it doesn't try to
       | speculatively fetch data, it just coordinates fetching so that
       | all the routines attempting to query the same key make one fetch
       | between them and share the same result.
       | 
       | https://github.com/golang/groupcache?tab=readme-ov-file#load...
        
         | Thaxll wrote:
         | It's using singleflight which was later on added to the Go std
         | lib:
         | 
         | https://github.com/golang/groupcache/tree/master/singlefligh...
         | 
         | https://pkg.go.dev/golang.org/x/sync/singleflight
        
           | ecnahc515 wrote:
           | Just a note that `x/sync` is not part of the Go std lib.
        
         | iudqnolq wrote:
         | Is groupcache suitable for current use? I don't see commits in
         | years and the issues have reports of panics due to bugs.
        
           | stock_toaster wrote:
           | Indeed. It also looks like there is a maintained fork[1], but
           | no clue with regards to the quality.
           | 
            | [1]: https://github.com/golang/groupcache/issues/158#issuecomment...
        
       | sakshamhhf wrote:
       | Not work
        
         | bnkamalesh wrote:
         | ??
        
       | indulona wrote:
       | I have implemented my own SIEVE cache, with TTL support. It
       | solves all these issues and requires no background workers.
       | Author, or anyone else interested in this, should read the SIEVE
       | paper/website and implement their own.
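
For readers who want the gist, here is a minimal stdlib-only sketch of SIEVE eviction (a FIFO queue, one "visited" bit per entry, and a hand that evicts the first unvisited entry). Names are illustrative, and the TTL support the commenter mentions is omitted:

```go
package main

import (
	"container/list"
	"fmt"
)

type entry struct {
	key     string
	val     int
	visited bool
}

// sieveCache: front of ll is newest, back is oldest; hand moves from
// oldest toward newest, clearing visited bits, and evicts the first
// entry whose bit is already clear.
type sieveCache struct {
	cap   int
	ll    *list.List
	items map[string]*list.Element
	hand  *list.Element
}

func newSieve(cap int) *sieveCache {
	return &sieveCache{cap: cap, ll: list.New(), items: map[string]*list.Element{}}
}

func (c *sieveCache) Get(key string) (int, bool) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).visited = true // hit: set the bit, no list movement
		return el.Value.(*entry).val, true
	}
	return 0, false
}

func (c *sieveCache) Set(key string, val int) {
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).val = val
		el.Value.(*entry).visited = true
		return
	}
	if c.ll.Len() >= c.cap {
		c.evict()
	}
	c.items[key] = c.ll.PushFront(&entry{key: key, val: val})
}

func (c *sieveCache) evict() {
	el := c.hand
	if el == nil {
		el = c.ll.Back() // start (or restart) at the oldest entry
	}
	for el.Value.(*entry).visited {
		el.Value.(*entry).visited = false // second chance: clear and move on
		el = el.Prev()
		if el == nil {
			el = c.ll.Back() // wrap from newest back to oldest
		}
	}
	c.hand = el.Prev() // resume scanning just past the victim next time
	delete(c.items, el.Value.(*entry).key)
	c.ll.Remove(el)
}

func main() {
	c := newSieve(2)
	c.Set("a", 1)
	c.Set("b", 2)
	c.Get("a")    // hit sets a's visited bit
	c.Set("c", 3) // full: hand clears a's bit, evicts unvisited "b"
	_, ok := c.Get("b")
	fmt.Println("b survived:", ok) // false
}
```

Unlike LRU, hits only flip a bit, so reads need no list reordering and no locking around promotion; eviction work happens lazily when the hand sweeps.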
        
         | bnkamalesh wrote:
         | interesting, will check out
        
       ___________________________________________________________________
       (page generated 2024-10-11 23:01 UTC)