[HN Gopher] Memcached vs. Redis - More Different Than You Would Expect
___________________________________________________________________
Memcached vs. Redis - More Different Than You Would Expect
Author : kelseyfrog
Score : 102 points
Date : 2021-10-11 16:22 UTC (6 hours ago)
(HTM) web link (engineering.kablamo.com.au)
(TXT) w3m dump (engineering.kablamo.com.au)
| germandiago wrote:
| This article is great and has good insights. I've never used
| these technologies, but it gives a good overview of the caching
| side of things.
| JensRantil wrote:
| The article states
|
| > Memcached slabs once assigned never change their size.
|
| This is incorrect. Modern versions of Memcached have support for
| this. See for example [1].
|
| [1] https://www.loginradius.com/blog/async/memcach-memory-
| manage...
| adventured wrote:
| Redis is probably my favorite software of the past decade. I
| abuse it liberally and it always takes it with resounding uptime
| and performance. In the stack that I utilize it's always among
| the things I have to worry the least about.
| lgl wrote:
| Agreed, Redis is truly wonderful. I've used it extensively as
| both a cache lookalike and as a primary datasource (as long as
| your data size/growth is known and manageable in RAM, which is
| pretty common for many projects and current hosts), and it's
| always super fast, reliable, and easy to work with.
| leetrout wrote:
| Redis + nginx (openresty).
|
| Lua in both places. So (ab)useful!
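|
| For a flavor of the Redis half, here's a minimal illustrative
| sketch (using the redis-py client; the key name and cap are
| made up) of running Lua inside Redis:
|
|     import redis
|
|     r = redis.Redis(host="localhost", port=6379)
|
|     # Atomically increment a counter and cap it at a limit,
|     # entirely inside Redis via a Lua script.
|     script = """
|     local v = redis.call('INCR', KEYS[1])
|     if v > tonumber(ARGV[1]) then
|         redis.call('SET', KEYS[1], ARGV[1])
|         v = tonumber(ARGV[1])
|     end
|     return v
|     """
|     print(r.eval(script, 1, "hits", 100))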
| fmajid wrote:
| I switched from Memcached to Redis 10 years ago because I had a
| situation where I needed to cache lookups of (x, y) but needed to
| invalidate (x, *), something that is trivial with Redis hashes,
| but extremely difficult in memcached.
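|
| In code, the pattern looks roughly like this (a minimal sketch
| with the redis-py client; the key naming is illustrative):
|
|     import redis
|
|     r = redis.Redis(host="localhost", port=6379)
|
|     # One hash per x; the y values are fields inside it.
|     def cache_set(x, y, value):
|         r.hset(f"cache:{x}", y, value)
|
|     def cache_get(x, y):
|         return r.hget(f"cache:{x}", y)
|
|     # Invalidating (x, *) is one operation: drop the hash.
|     def invalidate(x):
|         r.delete(f"cache:{x}")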
| Andys wrote:
| Worth adding that KeyDB forked redis and added multithreading
| support, so that gives another option for scaling up redis.
| dormando wrote:
| Looks like this article got one bit of updated information but
| missed everything else... I'll address some things point by
| point:
|
| data structures: - yes, fewer data structures, if any. The point
| isn't whether they have the features or not, but that memcached
| is a distributed system _first_, so any feature has to make
| sense in that context.
|
| "Redis is better supported, updated more often. (or maybe
| memcached is "finished" or has a narrower scope?)" - I've been
| cutting monthly releases for like 5 years now (mind the pandemic
| gap). Sigh.
|
| Memory organization: This is mostly accurate but missing some
| major points. The _sizes_ of the slab classes don't change, but
| slab pages can and do get re-assigned automatically. If you
| assign all memory to the 1MB page class, then empty that class,
| memory will go back to a global pool to get re-assigned. There
| are edge cases, but it isn't static and hasn't been for ten
| years.
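|
| You can watch the re-assignment from a client (a sketch
| assuming a local server and the pymemcache library; exact stat
| names vary by version):
|
|     from pymemcache.client.base import Client
|
|     c = Client(("localhost", 11211))
|
|     # Per-class slab stats; page counts shift between classes
|     # as emptied pages return to the global pool.
|     for key, value in c.stats("slabs").items():
|         print(key, value)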
|
| Item size limit: The max slab size has actually been 512k
| internally for a long time now, despite the item limit being 1mb.
| Why? Because "large" items are stitched together from smaller
| slab chunks. Setting a 2mb or 10mb limit is fine in most use
| cases, but again there are edge cases, especially for very small
| memory limits. Usually large items aren't combined with small
| memory limits.
|
| You can also _reduce the slab class overhead_ (which doesn't
| typically exceed 5-10%) by lowering the "slab_chunk_max" option,
| which puts the slab classes closer together at the expense of
| stitching together items larger than this size. I.e., if all of
| your objects are 16kb or less, you can freely set this limit to
| 16kb and reduce your slab class overhead. I'd love to make this
| automatic or at least reduce the defaults.
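|
| Concretely, that tuning looks something like this (an
| illustrative sketch; the flags are real memcached options, the
| sizes are made up):
|
|     # Server started with a raised item limit and tighter slab
|     # classes:  memcached -m 64 -I 2m -o slab_chunk_max=16384
|
|     from pymemcache.client.base import Client
|
|     c = Client(("localhost", 11211))
|
|     # Anything larger than slab_chunk_max gets stitched
|     # together from smaller slab chunks internally.
|     c.set("big", b"x" * (1024 * 1024))
|     assert c.get("big") == b"x" * (1024 * 1024)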
|
| LRU: looks like the author did notice the blog post
| (https://memcached.org/blog/modern-lru/) - I'll add that the LRU
| bumping (mutex contention) is completely removed from the
| _access path_. This is why it scales to 48 threads. The LRU
| crawler is not necessary to expire items; there is also a
| separate thread that does the LRU balancing.
|
| The LRU crawler is used to proactively expire items. It is
| highly efficient since it independently scans slab classes; the
| more memory an object uses, the fewer neighbors it has. It also
| schedules when to run on each slab class, so it can "focus" on
| areas with the highest return.
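|
| As a toy model of that scheduling idea (not memcached's actual
| code, just a sketch of the principle):
|
|     import time
|
|     # slab_classes: class id -> {key: expiry timestamp}
|     def crawl(items, now):
|         expired = [k for k, exp in items.items() if exp <= now]
|         for k in expired:
|             del items[k]
|         return len(expired)
|
|     def crawl_round(slab_classes, last_reclaim):
|         now = time.time()
|         # Visit the classes that reclaimed the most last round
|         # first - the highest expected return.
|         for cid in sorted(slab_classes,
|                           key=lambda c: -last_reclaim.get(c, 0)):
|             last_reclaim[cid] = crawl(slab_classes[cid], now)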
|
| Most of the thread scalability is pretty old; not just since
| 2020.
|
| Also worth noting memcached has an efficient flash-backed
| storage system: https://memcached.org/blog/nvm-caching/ - it
| requires RAM to keep track of keys, but can put value data on
| disk. With this tradeoff we can use flash devices without
| burning them out, as non-get/non-set operations do not touch the
| SSD (i.e., delete removes from memory, but doesn't cause a
| write). Many very large installations of this exist.
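|
| The core tradeoff fits in a few lines (a toy sketch, not the
| real system's design or code):
|
|     # Keys live in a RAM index; values are appended to a log
|     # on flash; deletes touch only the index, never the device.
|     class FlashCache:
|         def __init__(self, path):
|             self.index = {}  # key -> (offset, length), in RAM
|             self.log = open(path, "ab+")
|
|         def set(self, key, value):
|             self.log.seek(0, 2)
|             offset = self.log.tell()
|             self.log.write(value)
|             self.index[key] = (offset, len(value))
|
|         def get(self, key):
|             offset, length = self.index[key]
|             self.log.seek(offset)
|             return self.log.read(length)
|
|         def delete(self, key):
|             # No flash write at all - just drop the RAM entry.
|             self.index.pop(key, None)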
|
| I've also been working on an internal proxy which is nearing
| production-readiness for an early featureset:
| https://github.com/memcached/memcached/issues/827 - scriptable
| in Lua, it will have lots of useful features.
| lykr0n wrote:
| A very solid article giving a very good technical answer to the
| question of which one you should pick.
| tayo42 wrote:
| If Redis weren't single-threaded and had a way to do memory
| management like memcached's, it would probably be perfect.
| _hyn3 wrote:
| I quite like Redis as a cache or as an ad hoc locking mechanism
| accessible across a cluster, but as a database I will never
| fully trust it. RDB and AOF seem far too fragile to trust in
| production, especially in a distributed setting.
| echelon wrote:
| > RDB and AOF seem far too fragile to trust in production,
| especially in a distributed setting.
|
| Create a replica chain and have your secondaries performing AOF
| writes. Page on failure.
|
| Do an AOF rewrite or RDB dump at a regular cadence, offline,
| where the forking won't cause you latency.
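|
| In redis-py terms, the cadence is roughly (a sketch; the host
| name is hypothetical, and in practice you'd trigger this from a
| scheduler):
|
|     import redis
|
|     replica = redis.Redis(host="replica-1", port=6379)
|
|     # Let the secondary carry the AOF write load.
|     replica.config_set("appendonly", "yes")
|
|     # Rewrite/dump on a schedule, off the primary, so the
|     # fork's latency cost never lands on live traffic.
|     replica.bgrewriteaof()
|     replica.bgsave()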
| thinkingfish wrote:
| Disclosure: I'm maintainer/author of Pelikan.
|
| I wrote about Pelikan, a unified cache framework, and its
| relationship to Memcached and Redis in this blog from 2019:
| https://twitter.github.io/pelikan/2019/why-pelikan.html
|
| Specifically, I talked about what we would like to change about
| each.
___________________________________________________________________
(page generated 2021-10-11 23:00 UTC)