[HN Gopher] Show HN: I built a Ruby gem that handles memoization...
       ___________________________________________________________________
        
       Show HN: I built a Ruby gem that handles memoization with a ttl
        
       I built a Ruby gem for memoization with TTL + LRU cache. It's
       thread-safe, and has been helpful in my own apps. Would love to get
       some feedback: https://github.com/mishalzaman/memo_ttl
        
       Author : hp_hovercraft84
       Score  : 44 points
       Date   : 2025-04-22 16:51 UTC (6 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | locofocos wrote:
       | Can you pitch me on why I would want to use this, instead of
       | Rails.cache.fetch (which supports TTL) powered by redis (with the
       | "allkeys-lru" config option)?
        
         | film42 wrote:
         | Redis is great for caching a customer config that's hit 2000
         | times/second by your services, but even then, an in-mem cache
         | with short TTL would make redis more tolerant to failure. This
         | would be great for the in-mem part.
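
film42's two-tier idea (a short-TTL in-memory cache in front of Redis) can be sketched as below. `LocalFallbackCache` and its API are hypothetical names, not the gem's code, and the block passed to `fetch` stands in for the Redis read:

```ruby
require 'monitor'

# Hypothetical sketch: an in-process TTL cache that falls back to a
# slower source (e.g. a Redis call) on a miss or an expired entry.
class LocalFallbackCache
  Entry = Struct.new(:value, :expires_at)

  def initialize(ttl:)
    @ttl = ttl
    @store = {}
    @lock = Monitor.new
  end

  # Return the cached value if still fresh; otherwise call the block
  # (the "Redis" read) and cache its result for @ttl seconds.
  def fetch(key)
    @lock.synchronize do
      entry = @store[key]
      return entry.value if entry && entry.expires_at > Time.now

      value = yield
      @store[key] = Entry.new(value, Time.now + @ttl)
      value
    end
  end
end
```

With a 60-second TTL, repeated `fetch(:config) { redis_read }` calls inside the window hit only the local hash, so a Redis blip shorter than the TTL goes unnoticed by readers.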
        
         | thomascountz wrote:
         | I'm not OP nor have I read through all the code, but this gem
         | has no external dependencies and runs in a single process (as
         | does activesupport::Cache::MemoryStore). Could be a "why you
         | should," or a "why you should not" use this gem, depending on
         | your use case.
        
         | hp_hovercraft84 wrote:
         | Good question. I built this gem because I needed a few things
         | that Rails.cache (and Redis) didn't quite fit:
         | 
          | - Local and zero-dependency. It caches per object in memory,
          | so no Redis setup, no serialization, no network latency.
          | 
          | - Isolated and self-managed. Caches aren't global. Each
          | object/method manages its own LRU + TTL lifecycle and can be
          | cleared with instance helpers.
          | 
          | - Easy to use -- you just declare the method, set the TTL and
          | max size, and you're done. No key names, no block wrapping,
          | no external config.
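
The declarative style described here ("declare the method, set the TTL and max size") can be approximated with a toy macro. This is not the gem's implementation; `TinyMemo`, its crude insertion-order eviction, and the `Pricing` example are all illustrative:

```ruby
# Toy sketch of a declarative memoize macro with TTL and max size.
module TinyMemo
  def memoize(name, ttl:, max_size:)
    original = instance_method(name)
    define_method(name) do |*args|
      @_tiny_memo ||= {}
      cache = (@_tiny_memo[name] ||= {})
      entry = cache[args]
      if entry.nil? || Time.now > entry[:expires_at]
        cache.shift if cache.size >= max_size  # evict oldest insertion (crude, not true LRU)
        entry = { value: original.bind(self).call(*args),
                  expires_at: Time.now + ttl }
        cache[args] = entry
      end
      entry[:value]
    end
  end
end

class Pricing
  extend TinyMemo

  def quote(sku)
    @calls = (@calls || 0) + 1  # count underlying calls for the demo
    sku.length * 10             # stands in for an expensive lookup
  end
  memoize :quote, ttl: 60, max_size: 100

  attr_reader :calls
end
```

Calling `quote("abc")` twice inside the TTL runs the underlying method once; the per-object hash matches the "caches aren't global" point above.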
        
           | JamesSwift wrote:
            | For what it's worth, ActiveSupport::Cache::Store is a
            | really flexible API with minimal contractual obligations
            | (read_entry, write_entry, and delete_entry are the entire
            | set of required methods), but it still lets you layer
            | specific functionality (e.g. TTL) on top via an optional
            | 'options' param. You could get the best of both worlds by
            | adhering to that contract, and then people could swap in
            | e.g. the Redis cache store if they wanted a network-shared
            | store.
           | 
           | EDIT: see https://github.com/rails/rails/blob/main/activesupp
           | ort/lib/a...
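
The three hooks cited above are the private template methods that `ActiveSupport::Cache::Store` subclasses implement; `fetch`/`read`/`write`/`delete` are built on top of them. A dependency-free sketch of that contract's shape (in a real adapter these would be private methods of a `Cache::Store` subclass; the class name here is hypothetical):

```ruby
# Sketch of the narrow store contract described above. A real adapter
# would subclass ActiveSupport::Cache::Store; a plain class is used
# here so the shape is visible without Rails loaded.
class MemoTTLStore
  def initialize
    @data = {}
  end

  private

  # Look up a cache entry; nil means a miss.
  def read_entry(key, **options)
    @data[key]
  end

  # Store an entry; TTL-style behavior would inspect `options`.
  def write_entry(key, entry, **options)
    @data[key] = entry
    true
  end

  # Remove an entry, reporting whether anything was deleted.
  def delete_entry(key, **options)
    !@data.delete(key).nil?
  end
end
```

Adhering to this contract means a network-shared store (e.g. the Redis cache store) could later be swapped in by reimplementing just these three hooks.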
        
       | qrush wrote:
       | Congrats on shipping your first gem!!
       | 
       | I found this pretty easy to read through. I'd suggest setting a
       | description on the repo too so it's easy to find.
       | 
       | https://github.com/mishalzaman/memo_ttl/blob/main/lib/memo_t...
        
         | hp_hovercraft84 wrote:
         | As in identify where the source code is in the README?
        
           | zerocrates wrote:
           | I think they mean just set a description for the repo in
           | github (set using the gear icon next to "About"), saying what
           | the project is. That description text can come up in github
           | searches and google searches.
        
       | film42 wrote:
        | Nice! In Rails I end up using Rails.cache most of the time
        | because it's always "right there", but I like how you break the
        | cache out to be per-method to minimize contention. Depending on
        | your workload it might make sense to use a read-write lock
        | instead of a Monitor.
       | 
       | Only suggestion is to not wrap the error of the caller in your
       | memo wrapper.
       | 
       | > raise MemoTTL::Error, "Failed to execute memoized method
       | '#{method_name}': #{e.message}"
       | 
        | It doesn't look like you need to catch this for any operational
        | or state-tracking reason, so IMO you should not catch and wrap.
        | When errors are wrapped with a string like this (and caught and
        | re-raised) you lose the original stacktrace, which makes
        | debugging challenging. Especially when your error is something
        | like "pg condition failed for select" and you can't see where
        | it failed in the driver.
        
         | hp_hovercraft84 wrote:
         | Thanks for the feedback! That's a very good point, I'll update
         | the gem and let it bubble up.
        
         | JamesSwift wrote:
          | I thought Ruby would auto-wrap the original exception as long
          | as you are raising from a rescue block (i.e. as long as $! is
          | non-nil). So in that case you can just
          | 
          |     raise "Failed to execute memoized method '#{method_name}'"
          | 
          | and Ruby will set `cause` for you.
         | 
         | https://pablofernandez.tech/2014/02/05/wrapped-exceptions-in...
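
A runnable illustration of this behavior; the method name and error messages are invented for the example:

```ruby
# Raising inside a rescue block makes Ruby attach the in-flight
# exception ($!) as `cause` on the new exception automatically.
def memoized_call
  raise ArgumentError, "pg condition failed for select"
rescue
  raise "Failed to execute memoized method 'lookup'"
end

begin
  memoized_call
rescue => e
  puts e.message      # the wrapper message
  puts e.cause.class  # => ArgumentError (original preserved)
end
```

`e.cause.backtrace` still points into the original raise site, which is exactly the debugging information that gets lost when the message is interpolated into a fresh error instead.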
        
       | gurgeous wrote:
       | This is neat, thanks for posting. I am using memo_wise in my
       | current project (TableTennis) in part because it allows
       | memoization of module functions. This is a requirement for my
       | library.
       | 
        | Anyway, I ended up with a hack like this, which works fine but
        | didn't feel great.
        | 
        |     def some_method(arg)
        |       @_memo_wise[__method__].tap { _1.clear if _1.length > 100 }
        |       ...
        |     end
        |     memo_wise :some_method
        
       | JamesSwift wrote:
        | Looks good. I'd suggest making your `get` wait to acquire the
        | lock until needed. E.g. instead of
        | 
        |     @lock.synchronize do
        |       entry = @store[key]
        |       return nil unless entry
        |       ...
        | 
        | you can do
        | 
        |     entry = @store[key]
        |     return nil unless entry
        | 
        |     @lock.synchronize do
        |       entry = @store[key]
        |       ...
        | 
        | And similarly for other code paths.
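
A minimal sketch of that fast-path read (class and method names are illustrative, not the gem's): check the store outside the lock, and only synchronize, re-checking, on a miss.

```ruby
require 'monitor'

# Fast-path read: an optimistic unlocked lookup, falling back to a
# locked re-check only when the first read misses.
class FastReadCache
  def initialize
    @store = {}
    @lock = Monitor.new
  end

  def get(key)
    entry = @store[key]           # optimistic read, no lock held
    return entry unless entry.nil?

    @lock.synchronize do
      @store[key]                 # re-check under the lock
    end
  end

  def put(key, value)
    @lock.synchronize { @store[key] = value }
  end
end
```

On MRI the GVL makes the bare Hash read safe in practice; on other runtimes (JRuby, TruffleRuby) the memory-model question raised in the reply below is worth checking before adopting this.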
        
         | chowells wrote:
         | Does the memory model guarantee that double-check locking will
         | be correct? I don't actually know for ruby.
        
           | JamesSwift wrote:
           | I think it wouldnt even be a consideration on this since we
           | arent initializing the store here only accessing the key. And
           | theres already the check-then-set race condition in that
           | scenario so I think it is doubly fine.
        
       | wood-porch wrote:
       | Will this correctly retrieve 0 values? AFAIK 0 is falsey in Ruby
       | 
       | ``` return nil unless entry ```
        
         | chowells wrote:
          | No, Ruby is stricter than that. Only nil and false are
          | falsey.
        
           | wood-porch wrote:
           | Doesn't that shift the problem to caching false then :D
        
             | RangerScience wrote:
             | you can probably always just do something like:
             | def no_items?         !items.present?       end
             | def items         # something lone       end
             | memoize :items, ttl: 60, max_size: 10`
             | 
             | just makes sure the expensive operation results in a truthy
             | value, then add some sugar for the falsey value, done.
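
An alternative to the truthy-wrapper trick: store a wrapper object per key, so a cached nil or false is distinguishable from "not cached at all". A minimal sketch with hypothetical names:

```ruby
# Wrap every cached value in an Entry so a stored nil or false still
# looks like a cache hit; only a missing Entry triggers the block.
class FalsySafeCache
  Entry = Struct.new(:value)  # the Entry's presence means "cached"

  def initialize
    @store = {}
  end

  def fetch(key)
    entry = @store[key]
    return entry.value if entry  # hit even when value is nil/false

    value = yield
    @store[key] = Entry.new(value)
    value
  end
end
```

The cost is one extra object per entry; the gain is that `return nil unless entry` style checks test the wrapper, never the falsy payload.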
        
       | madsohm wrote:
        | Since using `def` to create a method returns a symbol with the
        | method name, you can do something like this too:
        | 
        |     memoize def expensive_calculation(arg)
        |       @calculation_count += 1
        |       arg * 2
        |     end, ttl: 10, max_size: 2
        | 
        |     memoize def nil_returning_method
        |       @calculation_count += 1
        |       nil
        |     end
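
This works because `def` is an expression that evaluates to the new method's name as a symbol (since Ruby 2.1), so its result can be fed straight to a macro like `memoize`. A quick check with a made-up method name:

```ruby
# `def` returns the method name as a symbol, while also defining
# the method as usual.
sym = def example_method
  42
end

# sym is :example_method, and example_method is callable.
```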
        
       | deedubaya wrote:
       | See https://github.com/huntresslabs/ttl_memoizeable for an
       | alternative implementation.
       | 
        | For those who don't understand why you might want something
        | like this: if you're doing high enough throughput that eventual
        | consistency is effectively the same as atomic consistency, and
        | IO hurts (e.g. Redis calls), you may want to cache in memory
        | with something like this.
       | 
       | My implementation above was born out of the need to adjust global
       | state on-the-fly in a system processing hundreds of thousands of
       | requests per second.
        
       ___________________________________________________________________
       (page generated 2025-04-22 23:01 UTC)