[HN Gopher] Transparent memory offloading: more memory at a frac...
___________________________________________________________________
Transparent memory offloading: more memory at a fraction of the
cost and power
Author : mfiguiere
Score : 28 points
Date : 2022-06-20 19:44 UTC (3 hours ago)
(HTM) web link (engineering.fb.com)
(TXT) w3m dump (engineering.fb.com)
| throwaway81523 wrote:
| Nice, they have reinvented virtual memory and paging to disk.
| woleium wrote:
| or "Meta re-invents swap disk"
| jhgg wrote:
| You should read the whole article, because TMO does differ from
| traditional swapping. Notably, swapping usually occurs during
| periods of extreme memory pressure, whereas TMO offloads memory
| much sooner and more intelligently.
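[The proactive policy described above can be sketched roughly. This is a hypothetical illustration, not Meta's actual TMO code: it assumes a pressure signal in the style of Linux's PSI interface (`/proc/pressure/memory`, available on kernels >= 4.20), with an illustrative threshold and names (`parse_psi`, `should_offload`) invented for the example.]

```python
# Hypothetical sketch: unlike classic swapping, which kicks in only
# under extreme memory pressure, offload earlier based on a
# continuous pressure signal. The threshold here is an assumption.

def parse_psi(text: str) -> dict:
    """Parse a /proc/pressure/memory style line, e.g.
    'some avg10=1.23 avg60=0.50 avg300=0.10 total=12345'."""
    fields = text.split()
    kind = fields[0]  # "some" or "full"
    vals = dict(f.split("=") for f in fields[1:])
    return {kind: {k: float(v) for k, v in vals.items()}}

def should_offload(psi: dict, threshold: float = 0.1) -> bool:
    # Offload proactively once even a small fraction of wall time
    # is lost to memory stalls -- well below the point where the
    # kernel would start swapping aggressively on its own.
    return psi.get("some", {}).get("avg60", 0.0) > threshold

sample = "some avg10=1.23 avg60=0.50 avg300=0.10 total=12345"
print(should_offload(parse_psi(sample)))  # prints True
```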
| trhway wrote:
| >Using a compressed-memory back end, TMO saves 7 percent to 12
| percent of resident memory across five applications. Multiple
| applications' data have poor compressibility, such that
| offloading to an SSD proves far more effective.
|
| That sounds strange in general. My experience across different
| application types over more than two decades has been very
| different (not regarding SSDs; I mean that in-memory compression
| has always been a very efficient approach).
|
| > Specifically, machine learning models used for Ads prediction
| commonly use quantized byte-encoded values that exhibit a
| compression ratio of 1.3-1.4x.
|
| Yep, I haven't used compression for that. It does raise a
| question, though: how many of those models does FB keep in
| memory? Any chance FB instantiates such a model for every
| user/request? :)
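[The compressibility point above can be demonstrated with Python's stdlib `zlib`. The data here is synthetic: near-uniform random bytes stand in for quantized byte-encoded model weights (high entropy, poor compressibility), while a repeated record stands in for typical redundant heap data. The 1.3-1.4x figure quoted from the article is not reproduced here; this only shows the direction of the effect.]

```python
import random
import zlib

def ratio(data: bytes) -> float:
    """Compression ratio: uncompressed size / compressed size."""
    return len(data) / len(zlib.compress(data))

random.seed(0)

# Synthetic stand-in for quantized byte-encoded model weights:
# near-uniform bytes have high entropy and barely compress.
quantized = bytes(random.randrange(256) for _ in range(1 << 16))

# Typical heap data is far more repetitive (zeroed pages,
# duplicated strings) and compresses very well.
repetitive = b"user_session_record\x00" * 3000

print(f"high-entropy ratio: {ratio(quantized):.2f}x")   # close to 1x
print(f"repetitive ratio:   {ratio(repetitive):.2f}x")  # much higher
```

When compression buys so little on high-entropy data, spending CPU and RAM on a compressed-memory tier loses to simply offloading those pages to an SSD, which is the trade-off the article describes.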
| MarkSweep wrote:
| Computer storage hierarchies are sure getting complex with CXL
| and NVM in the mix.
|
| https://en.wikipedia.org/wiki/Compute_Express_Link
___________________________________________________________________
(page generated 2022-06-20 23:00 UTC)