[HN Gopher] Redis server side if-modified-since caching pattern ...
___________________________________________________________________
Redis server side if-modified-since caching pattern using lua
Author : r4um
Score : 52 points
Date : 2021-06-22 06:38 UTC (16 hours ago)
(HTM) web link (blog.r4um.net)
(TXT) w3m dump (blog.r4um.net)
| irthomasthomas wrote:
| You can do something similar using the expire TTL by setting a
| date far in the future:
|     redis-cli SETEX mykey 310536000000 "helllooo"
|     myttl=$(redis-cli TTL mykey)
|     mdate=$((310536000000 - myttl))
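The arithmetic in that snippet can be sketched without a live Redis; the `FakeRedis` class below is a hypothetical stand-in for SETEX/TTL semantics, and the TTL constant is the one from the comment:

```python
import time

# The trick: store a value with a huge TTL, and later recover how long
# ago it was set by subtracting the remaining TTL from the original one.
ORIGINAL_TTL = 310_536_000_000  # far-future TTL, as in the comment


class FakeRedis:
    """Hypothetical in-memory stand-in for Redis SETEX/TTL semantics."""

    def __init__(self):
        self.store = {}

    def setex(self, key, ttl, value):
        # record the value together with its absolute expiry time
        self.store[key] = (value, time.time() + ttl)

    def ttl(self, key):
        # remaining seconds until expiry, truncated like Redis TTL
        _, expires_at = self.store[key]
        return int(expires_at - time.time())


r = FakeRedis()
r.setex("mykey", ORIGINAL_TTL, "helllooo")
# seconds elapsed since the key was set (its implicit "modified" time)
elapsed = ORIGINAL_TTL - r.ttl("mykey")
```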
| derefr wrote:
| Only if you have distinct keys. Most Redis use I've seen "at
| scale" has involved putting things in hashes/sets/lists.
|
| (I've wondered for a while now whether it'd be possible to
| implement expiry in Redis for sub-key data, without introducing
| too much overhead to data structures that don't need expiry.
| I'd think it'd be "just" a matter of having an alternative data
| representation that holds expiries for each element in the data
| structure, and then re-encoding the data structure over to that
| data representation when at least one element has an expiry --
| because if one does, then it's likely that many will.)
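The lazy re-encoding derefr describes might look something like the sketch below: a hash stays a plain dict until the first per-field expiry is set, at which point an auxiliary expiry map is attached. All names here are illustrative, not part of any Redis API:

```python
import time


class ExpiringHash:
    """Sketch: a hash that only pays for per-field expiries once one exists."""

    def __init__(self):
        self.fields = {}
        self.expiries = None  # allocated lazily, only if any field expires

    def hset(self, field, value):
        self.fields[field] = value

    def hexpire(self, field, ttl):
        if self.expiries is None:
            # "re-encode" the structure to its expiry-aware representation
            self.expiries = {}
        self.expiries[field] = time.time() + ttl

    def hget(self, field):
        # check (and reap) an expired field before returning it
        if self.expiries and field in self.expiries:
            if time.time() >= self.expiries[field]:
                self.fields.pop(field, None)
                self.expiries.pop(field, None)
                return None
        return self.fields.get(field)
```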
| bnajdecki wrote:
| Really well-explained article.
| habibur wrote:
| Also note that HTTP has had this since its early days.
|
| https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If...
|
| And Apache and other web servers support it out of the box.
|
| If your use case allows it, consider ditching Redis and instead
| statically caching your generated HTML/JSON files on the file
| system as you serve them, letting Apache serve from there as
| long as those cached files exist. You get all these features
| for free.
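The conditional-request logic a server applies here is small; a minimal sketch of the If-Modified-Since comparison, using only the Python standard library (the function name and return convention are assumptions for illustration):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime


def maybe_304(if_modified_since, last_modified):
    """Return 304 if the client's cached copy is still fresh, else 200.

    if_modified_since: raw header value from the client, or None
    last_modified:     datetime when the resource last changed
    """
    if if_modified_since is not None:
        try:
            since = parsedate_to_datetime(if_modified_since)
        except (TypeError, ValueError):
            return 200  # unparseable header: serve the full response
        # HTTP dates have one-second resolution, so drop microseconds
        if last_modified.replace(microsecond=0) <= since:
            return 304
    return 200


last = datetime(2021, 6, 22, 6, 38, tzinfo=timezone.utc)
status = maybe_304(format_datetime(last, usegmt=True), last)
```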
| lf-non wrote:
| As an extension, for use cases where this is not feasible (e.g.
| you have clustering, non-trivial permission checks, or multi-
| entity payloads) it is quite convenient to have nginx with
| OpenResty talk to Redis.
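A minimal sketch of that nginx + OpenResty glue, assuming the lua-resty-redis library; the cache-key scheme, fallback location, and upstream name are all illustrative (the article's pattern would add its if-modified-since check around the lookup):

```nginx
location / {
    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(100)  -- ms; fail fast and fall through to the app

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            return ngx.exec("@app")
        end

        -- key scheme is an assumption, not from the article
        local res, err = red:get("cache:" .. ngx.var.uri)
        if not res or res == ngx.null then
            return ngx.exec("@app")  -- miss: render via the app server
        end
        ngx.say(res)
    }
}

location @app {
    proxy_pass http://app_backend;  # illustrative upstream name
}
```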
| derefr wrote:
| I would note that while Nginx itself doesn't natively support
| Redis (and so you have to script things yourself in that
| case), it _does_ have full native support for memcached (or
| rather, the memcached wire protocol, spoken by several
| backends, though sadly Redis isn't one of those.)
|
| See
| http://nginx.org/en/docs/http/ngx_http_memcached_module.html
| for details.
|
| Basically, you can hand off to memcached as if handing off to
| an upstream, and then if memcached doesn't have the key,
| continue on trying to resolve the response through your
| actual app-server upstream. (This is essentially the same
| thing you're suggesting doing through scripting, to be clear;
| it's just easier to implement for the memcached case.)
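For comparison, the native memcached hand-off needs no scripting at all; a sketch using ngx_http_memcached_module, where the upstream name and the convention that the app pushed pages keyed by URI are assumptions:

```nginx
location /report {
    set $memcached_key $uri;        # key under which the app pushed the page
    memcached_pass 127.0.0.1:11211;
    default_type application/json;  # memcached stores no content type
    # on a miss (404) or memcached being unreachable, retry via the app
    error_page 404 502 504 = @app;
}

location @app {
    proxy_pass http://app_backend;
}
```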
|
| This type of middleware-level "read-aside" caching is really
| neat, because it enables _push-based_ response caching. With a
| read-through cache that expires items out of itself, an expired
| item potentially leads to a thundering herd of clients all
| trying to grab the expired item, generating independent
| requests to your backend (which can hopefully be coalesced into
| a single recomputation, but that's unlikely in a distributed
| environment). Instead, you can keep the cache always-populated:
| every once in a while -- perhaps on a schedule, perhaps in
| response to discovering new primary-source data -- you
| recompute your denormalized /report resource and atomically
| overwrite what was in the cache with the new data.
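The push-based refresh described above reduces to a small loop body; a sketch where a plain dict stands in for the cache and all names are illustrative:

```python
# Writer recomputes the denormalized resource and atomically overwrites
# the cached value, so readers never see a miss or an empty key.
cache = {}


def recompute_report():
    # stands in for an expensive denormalization over primary data
    return {"rows": 3, "generated": "2021-06-22"}


def refresh(key):
    # compute *before* touching the cache, then swap in one step:
    # readers see either the old value or the new one, never neither
    new_value = recompute_report()
    cache[key] = new_value  # single-key assignment is the atomic swap


def read(key):
    return cache[key]  # always a hit once the writer has run


refresh("/report")
```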
___________________________________________________________________
(page generated 2021-06-22 23:02 UTC)