[HN Gopher] DDRamDisk: RAM disk, a disk based on RAM memory chips
       ___________________________________________________________________
        
       DDRamDisk: RAM disk, a disk based on RAM memory chips
        
       Author : thunderbong
       Score  : 127 points
       Date   : 2023-03-17 09:50 UTC (13 hours ago)
        
 (HTM) web link (ddramdisk.store)
 (TXT) w3m dump (ddramdisk.store)
        
       | jeffbee wrote:
       | Pick and place machines hate him!
        
       | GordonS wrote:
       | I don't have a use for this, but I enjoyed the detailed write-up!
       | 
       | In these days of fast SSDs, are there still uses for a RAM disk,
       | beyond extreme niches?
        
         | jeroenhd wrote:
          | SSDs have wear which will lead them to eventual failure. Wear
          | isn't nearly as bad as a few years back, but you can still
          | only write to a cell a limited number of times. If you're
          | constantly writing data to your disks, you may need something
          | that doesn't die.
         | 
         | I would personally go with a "normal" RAM disk in this case,
         | but CPUs only have a limited amount of RAM and memory channels
         | available. Complex operations on RAM disks may also increase
         | the load on the CPU which can be a performance downside if
          | you're doing things like compiling large code bases. Coupled
          | with a battery backup, this looks like a pretty neat
          | alternative to SSDs for write-heavy operations, assuming you
          | periodically persist the important data on something else
          | (such as a hard drive).
         | 
          | I'd be wary of bit flips running this card, though. Without
          | ECC, bitflips in RAM are just something you should expect.
          | Normal RAM doesn't hold the same data for an entire year, but
          | this semi-permanent setup may be more vulnerable to bitflips.
         | 
          | I know RAID cards often contain a battery-backed RAM cache
          | for file operations in case the power goes out; perhaps this
          | card can be useful for that as well? With ZFS you can set up
          | all kinds of fancy buffering/caching, and I imagine an SSD
          | write cache would show wear and tear much faster than one of
          | these cards, and you can't exactly hot swap M.2 cards. A few
          | gigabytes of cheap persistent write cache may be just the
          | solution some people have been looking for.
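The ZFS side of this is straightforward; a sketch of attaching such a card as a dedicated intent log or read cache, assuming a pool named `tank` and the card showing up as `/dev/nvme1n1` (both names are hypothetical):

```shell
# Use a battery-backed RAM drive as a separate intent log (SLOG) vdev;
# synchronous writes are acknowledged once they land on this device.
zpool add tank log /dev/nvme1n1

# Alternatively, use it as an L2ARC read cache:
# zpool add tank cache /dev/nvme1n1

# Verify the resulting vdev layout:
zpool status tank
```

These are admin commands requiring an existing pool; the same card could also back a `special` allocation class vdev, as discussed further down the thread.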
        
           | remlov wrote:
           | ECC is supported, see the following comment:
           | https://ddramdisk.store/2023/01/19/the-ddr4-pcie-x8-lady-
           | has...
        
             | chunk_waffle wrote:
             | exactly what I was wondering, thanks!
        
             | jeroenhd wrote:
             | That's very good. I'm not sure how the drive would signal
             | bitflips that can't be corrected (and what the operating
             | system will do when it happens), but at least the support
             | is there!
        
               | rsaxvc wrote:
               | The same way NVMe does, as a failed read?
        
         | Felger wrote:
          | A very useful use case I discovered just last week: local
          | dedup management by the Synology C2 Backup Agent and TBW on
          | the OS SSD.
         | 
         | C2 Backup agent stores dedup/chunks data by default in
         | ProgramData, which is stored on C:... which is usually a SSD
         | nowadays.
         | 
          | I noticed a 3:4 ratio between written data in the local dedup
          | folder vs uploaded data volume on the remote C2 5 TB storage
          | (I subscribed to a C2 Business plan).
         | 
         | TBW grew indeed horrifyingly fast on the SSD, and I estimated
         | it would completely wear it in about a year or so, with the 2
         | TB and growing data to backup with my standard retention
         | scheme.
         | 
          | So I made a 32 GB (16 GB was not enough for peak size) ImDisk
          | ramdisk with backup/restore at shutdown/startup (a feature
          | ImDisk supports quite nicely), mounted it in place of the
          | dedup folder, and ran my tasks.
         | 
          |  _poof_, reduced TBW on SSD by 99%.
         | 
         | (4x16 GB DDR4 ECC Reg on my server, so not concerned about
         | memory errors)
        
           | Dylan16807 wrote:
           | I think the question was more tuned to physical ram disks,
           | but I'm not sure.
           | 
           | Either way, how many terabytes were being written each day?
            | And how much can your drive take? It looks like I could go
            | pay $60 right now for 600TB of endurance, and $35 for 200TB
            | of endurance. If you already have the extra RAM then go for
            | it, but it doesn't seem like a setup to make on purpose.
           | 
           | Maybe your backup system has far more writes than mine? I
           | have terabytes of backups but the average written for each
           | daily backup is about 10GB.
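The back-of-the-envelope arithmetic here is simple; a quick sketch using the numbers mentioned above (a 600 TBW rating and ~10GB written per daily backup, both taken from this thread rather than measured):

```shell
# Rough SSD lifetime: rated endurance (TBW) divided by daily write volume.
endurance_tb=600   # rated endurance of the hypothetical $60 drive, in TB
daily_gb=10        # average GB written per daily backup
awk -v e="$endurance_tb" -v d="$daily_gb" 'BEGIN {
  days = e * 1000 / d               # days until the TBW rating is reached
  printf "%.0f days (~%.0f years)\n", days, days / 365
}'
# -> 60000 days (~164 years)
```

With Felger's 150 TBW drive and a much heavier dedup churn, the same formula is what collapses the lifetime to roughly a year.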
        
             | Felger wrote:
             | (I was answering to the previous comment wondering if
             | ramdisks still have any interesting usage nowadays.)
             | 
              | About 150 TBW endurance on a 250 GB Samsung 970 EVO Plus
              | M.2 NVMe. On _paper_ that is, but since it is the OS SSD
              | with Windows Server 2022 STD (sole AD DC/DHCP/DNS/Hyper-V)
              | in production, I won't take any risk. RAID 1 in this
              | scenario would have changed nothing. On the side, I have
              | 40 TB RAID10 for storage.
             | 
             | So I cancelled the first C2 exec (with C2 encrypt on) when
             | I reached 195 TBW on the SSD. Monitoring the ramdisk use
             | still shows about 3:4 ratio on complete snapshot.
             | 
              | I have about 1 million files in the 2.29 TB of data to
              | back up.
             | 
              | I had the RAM sticks available for free; I simply had to
              | take them (2x16 GB) from a decommissioned ML350 Gen9
              | (which uses DDR4-2133P ECC Reg). It now serves me as a
              | bench, _literally_.
        
               | Dylan16807 wrote:
               | Unless that server replaces a quarter of those files
               | every day, the lesson I'm getting here is "Don't use C2
               | Backup agent".
        
         | JonChesterfield wrote:
         | I use one for building C++. Granted that's a bit niche, but a
         | tmpfs filesystem over the build directory keeps the load off
         | the SSD. Haven't actually checked it's still faster for a while
         | but it certainly used to be. Have been doing that for five
         | years or so.
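For reference, this kind of setup is just a tmpfs mount over the build directory; a minimal sketch (the path and 16G size are illustrative, and anything in the directory is lost on unmount or reboot):

```shell
# One-off: mount a RAM-backed tmpfs over the build directory.
# tmpfs allocates pages lazily, so size= is a cap, not a reservation.
sudo mount -t tmpfs -o size=16G tmpfs ~/project/build

# Persistent equivalent as an /etc/fstab entry:
# tmpfs  /home/user/project/build  tmpfs  size=16G  0  0
```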
        
           | simplotek wrote:
           | > I use one for building C++. Granted that's a bit niche
           | (...)
           | 
            | Not niche at all. Using a RAM disk for building C++
            | applications is one of the oldest and most basic build
            | optimization tricks around. It's especially relevant when
            | using build cache tools like ccache, which lead large C++
            | builds to no longer be CPU bound and become IO bound.
        
         | crest wrote:
          | While local NVMe SSD raids can max out a PCIe 16x slot given
          | large enough blocks and enough queue depth, they still can't
          | keep up with small to medium sync writes unless you can keep
          | a deep queue filled. Lots of transaction processing workloads
          | require low latency commits, which is where flash-backed DRAM
          | can shine. DRAM requires neither wear leveling nor
          | UNMAP/TRIM.
          | 
          | If the power fails you use stored energy to dump the DRAM to
          | flash. On startup you wait for the stored energy to reach a
          | safe operating level while restoring the content from flash;
          | once enough energy is stored you erase enough NAND flash to
          | quickly write a full dump. At this point the device is ready
          | for use. If you overprovision the flash by at least a factor
          | of two you can hide the erase latency and keep the previous
          | snapshot. Additional optimisations, e.g. using chunked or
          | indexable compression, can reduce the wear on the NAND flash,
          | effectively using the flash like a simplified flat compressed
          | log structured file system.
          | 
          | I would like two such cards in each of my servers as ZFS
          | intent log, please. If their price and capacity are
          | reasonable enough I would like to use them either as L2ARC
          | or for a special allocation class VDEV reserved for metadata
          | and maybe even small block storage for PostgreSQL databases.
        
         | 2000UltraDeluxe wrote:
          | For anything that requires temporary but really fast storage,
          | RAM disks are still a thing. The number of valid use cases
          | has gone down since SSDs became the norm, but there are still
          | situations where disk I/O or the fear of wearing out an SSD
          | are valid concerns.
        
         | Dalewyn wrote:
         | SSDs currently peak somewhere around 7GB/s transfer speeds,
         | while RAM can easily knock out well over 20GB/s (and that's a
         | low estimate). So anything that benefits from fast transfer
         | speeds and/or low latency will appreciate a RAM disk.
         | 
         | SSDs are also consumable, as mentioned in other comments, so
         | RAM disks are perfect for a scratch disk. HDDs can also serve
         | as a scratch disk, but some tasks also appreciate the
         | aforementioned faster transfer speeds and/or lower latency of
         | SSDs or RAM.
        
           | GordonS wrote:
           | > So anything that benefits from fast transfer speeds and/or
           | low latency will appreciate a RAM disk
           | 
           | Well, anything that doesn't require persistence!
        
             | chronogram wrote:
             | All the products like this that I've seen carry a battery
             | for persistence during power outages.
        
             | nunobrito wrote:
             | Define persistency.
        
           | mxfh wrote:
           | You can easily get to about 20 GB/s by using PCI-E 4.0 NVMe
           | in striped 4x configurations. Comparing this 16x setup to
           | single lane SSD access is not a fitting comparison. With
           | prices for NVME finally going down, you can get 8TB at those
           | speeds for under USD 1k.
        
         | rwmj wrote:
          | We used a network-backed temporary RAM disk in our RISC-V
          | package build system. Each time a build started, it connected
          | to the NBD server, which automatically created a RAM disk
          | ("remote tmpfs"). On disconnection the RAM disk was thrown
          | away. Which is fine for builders; I wouldn't much recommend it
         | for anything else! https://rwmj.wordpress.com/2020/03/21/new-
         | nbdkit-remote-tmpf...
        
       | sandworm101 wrote:
       | Not a new concept. Has been done a few times before. The use
       | cases are very small, and getting smaller these days as
       | motherboards accept larger and larger memory modules.
       | 
       | https://www.newegg.ca/gigabyte-gc-ramdisk-others/p/N82E16815...
        
         | zmxz wrote:
         | [flagged]
        
         | nubinetwork wrote:
         | Even older... https://silentpcreview.com/review-blast-off-with-
         | cenateks-ro...
        
           | rwmj wrote:
           | The concept is old as the hills, if you include static RAM.
           | It was a common way for early digital synthesizers (early
           | 80s) to store patches in battery backed SRAM. Even on
           | removable cards.
        
             | semi-extrinsic wrote:
             | The original GameBoy cartridges often used battery backed
             | SRAM for savegames.
        
             | nubinetwork wrote:
             | I don't personally, because battery backed SRAM is a poor
             | man's eeprom.
        
             | dboreham wrote:
             | My first encounter with the concept was in 1982 during the
             | summer working at an electronics company. We used the then
             | new 64K DRAMs in production so there was a ready supply of
             | devices vendors had given us for evaluation. I built a
             | 128KB memory board with bank selection logic then wrote a
             | driver for the Flex OS (6809) to make that memory appear
             | like a disk. Also built a similar board with (probably,
             | long time ago..) 27128 eproms that worked as a "ROM-Disk".
             | I doubt I invented the ram disk. I'd probably heard of the
             | idea somewhere, but I hadn't actually seen an
             | implementation before the one I made.
             | 
             | The next year I switched to a larger company for summer
             | work (defense contractor) and there wrote a driver for CP/M
             | that talked to a "server" I wrote running on their VAX (via
             | 19.2K serial). It made a large file on the VAX look like a
             | disk to CP/M. We used this arrangement for backup -- copy
             | all the files from the physical hard drive to the remote
             | drive. Again I don't believe I'd seen this done before but
             | it was a fairly obvious extension of the previous ram disk
             | idea.
             | 
             | Unfortunately it turned out the group I worked for got
             | billed by the computer dept for CPU time and I/O on the VAX
             | and my network attached storage scheme ran up a huge bill
             | so had to be abandoned.
        
               | rwmj wrote:
               | If you like FlexOS you'll like this Youtube channel:
               | https://www.youtube.com/user/deramp5113/videos
               | 
               | As for me it's a bit before my time. I started out
               | writing device drivers for Microware OS-9 on 68k.
        
               | jonsen wrote:
               | > I built a 128KB memory board with bank selection logic
               | then wrote a driver for the Flex OS (6809) to make that
               | memory appear like a disk.
               | 
               | Oy! I could have written exactly that. What a coincidence
               | :-) I also wouldn't claim invention. Maybe I heard of it,
                | but I'd say the idea is pretty obvious. We made it
                | possible for Flex to run two programs simultaneously and
               | also do print spooling. (That's when I bought an HP 16C.
               | Great help. Still have it.)
        
             | neilv wrote:
             | Also some early general-purpose microcomputer products in
             | early '80s:
             | 
             | http://s100computers.com/Hardware%20Folder/SemiDisk/History
             | /...
             | 
             | http://s100computers.com/Hardware%20Folder/Electralogics/Hi
             | s...
        
       | johndunne wrote:
        | Fascinating read. The read/write performance is impressive. But
        | the whole time I was reading the article I kept thinking...
        | imagine the performance of a DDR4... no, DDR5 version!
       | 
       | I'd love to get my hands on one of these and try out a pxe booted
       | os.
        
         | bitwize wrote:
         | Imagine a Beowulf cluster supported by these!
        
         | unnouinceput wrote:
          | You have DDR4 or DDR5 memory as your normal RAM? Then you can
          | make a RAM disk using any of the software tools available on
          | the internet, and then test said drive using AS SSD Benchmark
          | (same as the article's author).
        
         | jeroenhd wrote:
         | There's a blog post about their DDR4 version from last month.
         | Sustained read and write speeds of 15GB/s for sequential
         | operations, with about 3GB/s for random I/O seem to be the
         | expected throughput.
         | 
         | I don't know what loads demand such high persistent
         | throughputs, but that's one place SSDs still can't compete, as
         | performance quickly drops when their DRAM cache fills up.
         | 
          | Still, with NVMe drives going up to 10GB/s these days, I
          | think we're close to the point where the PCIe overhead of
          | these RAM drives will make them unable to compete with
          | persistent storage on performance. Preventing wear will be
          | the only reason to go for these RAM drives.
         | 
          | If you want to experience the performance of a RAM-based
          | system, there's very little preventing you from dedicating a
          | portion of RAM to a RAM disk, copying your boot image to
          | that, and running directly from RAM. Several Linux
         | recovery images are built to do exactly that, in fact. If you
         | want to run Windows or something else that doesn't have such
         | functionality out of the box, I imagine using a lightweight
         | Linux VM to bootstrap (and forward) your OS peripherals may
         | solve that problem for you as well.
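One concrete example of such a recovery-image setup: Debian's live-boot copies the whole image into RAM when `toram` is on the kernel command line, so the boot medium can be removed afterwards. A sketch of the boot entry (paths are illustrative):

```shell
# GRUB menu entry fragment for a Debian live image running entirely from RAM.
# "toram" tells live-boot to copy the squashfs image into a RAM disk first.
linux  /live/vmlinuz boot=live toram quiet
initrd /live/initrd.img
```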
        
       | PragmaticPulp wrote:
       | Interesting design. They use FPGAs to emulate NAND storage with
       | DDR, then use a standard NAND SSD controller.
       | 
       | It doesn't perform any better than fast NVMe SSDs for larger,
       | sequential operations. However, it appears to be an order of
       | magnitude faster for the random 1K read/write operations.
       | 
       | It also has infinite durability relative to an SSD, though
       | obviously your data isn't infinitely durable during a power
       | outage scenario. Would be helpful to know how long that battery
       | can keep it backed up.
        
         | krab wrote:
         | > Would be helpful to know how long that battery can keep it
         | backed up.
         | 
         | They say: It has a built-in LiPo and stores your data for up to
         | a year.
        
         | sandworm101 wrote:
         | >> infinite durability relative to an SSD
         | 
         | Until the FPGA dies. Consumer-grade arrays are not eternal.
         | Many have lifespans (>5% failure rate) as low as 2-5 years
         | under regular use.
        
           | pkaye wrote:
            | Is the failure rate higher than for other kinds of chips?
            | And if so, what causes that?
        
           | numpad0 wrote:
           | Perhaps durability can be clarified as infinite write
           | endurance, which it has.
        
         | mittermayr wrote:
          | It would be interesting to slap an SSD on the back of it and
          | give it just enough capacitor power to dump whatever is in
          | RAM to the SSD when a power-out happens (and restore upon
          | boot). SSD writes would be super rare, only happening during
          | power issues or when shutting it all down.
        
         | cryptonector wrote:
         | But it must suffer from rowhammer.
        
           | omoikane wrote:
           | I was wondering about that too, maybe they are relying on ECC
           | memory to patch those over. The main page doesn't say, but in
           | one of the comments here it says ECC memory is supported.
           | 
           | https://ddramdisk.store/2023/01/19/the-ddr4-pcie-x8-lady-
           | has...
        
       | thefz wrote:
       | Insane, I love it. And I want one.
        
       | cronix wrote:
       | I remember doing something similar in windows 3.1 back around
       | 90-91. Hard disks weren't quite fast enough to play very good
       | video (I was using 3D Studio back then and IIRC this was still
       | owned by Autodesk then as a spinoff product of Autocad) so you
       | made a ramdisk and played the video from that. Only a few seconds
       | at 640x480x256(colors) though. I think I had 4 Megs of ram in
       | that 486 machine.
        
         | TonyBagODonuts wrote:
         | I believe it was Discrete Logic out of Chicago that made 3d
         | studio before the tragic buyout by autodesk.
        
       | notyourday wrote:
       | Around 1995(?) Erol's Internet used a static RAM based ram-drive
       | device to process email for its tens (hundreds?) of thousands of
       | users. Its larger brother was used to handle Usenet.
       | Unfortunately the Usenet feed was growing like crazy and soon
       | that large drive could not handle it.
       | 
       | ... In 2010 some slightly nutty young engineers who heard about
       | that story from the grey beards they worked with at a future very
       | well known company on a very large mysql instance used a monster
       | ramdisk as a single master to achieve a crazy boost in
       | performance. The hard data persistence was achieved via
       | replication to the regular spinning rust slaves. While it worked
       | really well for their application no one ever battle tested bad
       | crashes in production...
       | 
        | ... that led to a product around 2013(?)-2014(?) from Violin
        | Memory which combined the ramdisk with, if I recall correctly,
        | spinning disks to dump the data to in case of a power loss. The
        | devices were absolutely amazing but did not gain a foothold in
        | the market. I think they sold a few hundred units total. The
        | product was abandoned in favor of flash arrays.
        
         | nebula8804 wrote:
         | OMG Erol's internet was my first ISP as a kid in elementary
         | school here in NJ. It provided my first experiences into the
         | web and I remember it fondly because of that.
         | 
          | One of the first tech-based mistakes I ever made was to
          | convince my parents to switch from Erols (which had decent
          | ping times for online gaming) to AOL (which had horrendously
          | bad ping times), all because I thought I was missing out on
          | the exclusive
         | content that AOL provided. I do recall fun memories living in
         | AOL's walled garden but giving up that ping time was
         | horrendously bad. I once ripped out the phone wire from the
         | jack in extreme frustration (first time tech made me angry
         | lol!)
         | 
          | We eventually switched to Earthlink (and then I think Juno?)
         | once the AOL 1 year contract was up. Excellent ping times but
         | man Erols will always have that spot in my memories.
         | 
         | I miss all the excitement and innovation happening back then. I
         | wish we still had mom and pop stores providing things like
         | internet services. Even startups today don't feel like they
          | could be done as simple "mom n pop" enterprises, although I'm
          | sure there are plenty hiding in places we don't often look.
        
       | peter_d_sherman wrote:
       | I like the idea of using multiple FPGAs to
       | ["fanout"/"cascade"/"distribute"/"one-to-many proxy" -- choose
       | the terminology you like best] the SM2262EN to multiple sticks of
       | DDR3 RAM...
       | 
       | I'd be curious though, if the SM2262EN itself couldn't be
       | replaced by yet another FPGA, and, if so, if the FPGA used for
       | that purpose could be the exact same type as the other four...
       | 
       | If so -- then one could sort of think of that arrangement as sort
       | of like a Tree Data Structure -- that is 2 levels deep...
       | 
       | But what would happen if we could make it 3 or more levels deep?
       | 
       | ?
       | 
       | In other words, if we had 4 such boards and we wanted to chain
       | them -- then we'd need another central memory controller (another
       | FPGA ideally) -- to act as the central hub in that hierarchy...
       | 
       | It would be interesting, I think, to think of a future hardware
       | architecture which allows theoretically infinite upscaling via
       | adding more nested sub-levels/sub-components/"sub-trees" (subject
       | to space and power and max signal path lengths and other physical
       | constraints, of course...)
       | 
       | I also like the idea of an FPGA proxy between a memory controller
       | and RAM... (what other possibilities could emerge from this?)
       | 
       | Anyway, an interesting device!
        
         | mastax wrote:
         | You can implement an SSD controller in an FPGA. That's how all
         | the early server SSDs were implemented. I think my Fusion
         | ioScale was one of them.
         | 
         | It's just an enormous amount of effort. This already looks like
         | a huge amount of engineering to do for what must be a very
         | niche product.
        
       | vertnerd wrote:
       | I worked on a PC/DOS based instrument for testing permanent
       | magnets back in the 80s. Because of all the magnets involved, we
       | used a battery-backed Static RAM disk instead of a conventional
       | hard drive. The "disk" consisted of a cartridge that plugged into
       | an ISA expansion card in the PC. It was crazy expensive if I
       | remember correctly. One of my contributions to the project was
       | demonstrating that a conventional hard drive inside a steel
       | computer case was actually quite invulnerable to the stray
       | magnetic fields that we were working with. We could forgo the
       | extra cost of the SRAM disk.
       | 
       | We also used a crazy expensive plasma display for the monitor,
       | which turned out to be overkill. But that's a story for another
       | time.
        
       | kjs3 wrote:
       | Nice respin on an old theme. Reminds me of the Sun Prestoserve
       | card:
       | https://docs.oracle.com/cd/E19620-01/805-4448/6j47cnj0t/inde...
        
       | qwerty456127 wrote:
       | I wonder why this is not a commonly available and used thing.
       | 
        | I have always wanted to use more RAM chips than my
        | CPUs/motherboards would support: put all the swap on RAM chips,
        | probably also load the whole OS & apps system drive into RAM
        | this way (hint for a board like this: add an SSD it would load
        | from once at power-on), and only use a persistent storage drive
        | for my actual data files.
       | 
        | Using more RAM instead of HDD/SSD always felt like something
        | producing a really great performance return on the money
        | invested, as RAM is relatively cheap and really fast. The
        | amount of RAM you were allowed to plug into a PC motherboard
        | always felt like the most annoying limitation.
        
         | [deleted]
        
         | fulafel wrote:
         | How much do you need? I think you can put 8 TB now.
         | 
         | (And of course you get a lot better bandwidth and latency than
         | hanging it off some IO attachment)
        
         | zitterbewegung wrote:
          | You can do something similar (without a LiPo) in software on
          | Linux: https://linuxhint.com/create-ramdisk-linux/
          | 
          | The limitation is that the system would need to have a
          | battery backup.
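Besides tmpfs, Linux can also expose RAM as an actual block device through the brd module, which then takes a real filesystem like any disk; a sketch (the 4 GiB size and mount point are illustrative):

```shell
# Create one RAM block device at /dev/ram0; rd_size is in KiB,
# so 4194304 KiB = 4 GiB.
sudo modprobe brd rd_nr=1 rd_size=4194304

# Format and mount it like an ordinary disk.
sudo mkfs.ext4 /dev/ram0
sudo mkdir -p /mnt/ramdisk
sudo mount /dev/ram0 /mnt/ramdisk
```

Unlike tmpfs, this gives you a block device you can benchmark with the same tools as the card in the article, at the cost of pinning the full size in RAM.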
        
         | PragmaticPulp wrote:
         | > I have always wanted to use more RAM chips than my
         | CPUs/motherboards would support, put all the swap on RAM chips,
         | probably also load the whole OS&apps system drive into RAM this
         | way (hint for your board: add an SSD it would load from,
         | single-time on turn-on) and only use a persistent storage drive
         | for my actual data files.
         | 
         | You could create a RAM disk post-boot and then copy apps into
         | it or use it for a working directory.
         | 
         | But you'll be disappointed to discover that virtually nothing
         | benefits from this compared to a modern SSD. Copying files will
         | be faster, but that's about it.
         | 
         | Operating systems are already very good at caching data to RAM.
         | Modern SSDs are fast enough to not be the bottleneck in most
         | operations, from app loading to common productivity tasks.
         | 
          | Even when we all had slower HDDs in our systems, creating a
          | RAM disk wasn't a big enough improvement to warrant it for
          | most tasks. I remember reading a lot of experiments where
          | people made RAM disks to try to speed up their development
          | workflows, only to discover that it made no difference
          | because storage wasn't the bottleneck.
        
         | matja wrote:
         | > The amount of RAM you were allowed to plug into a PC
         | motherboard always felt like the most annoying limitation
         | 
          | You could always get motherboards that took more RAM, just
          | not ones that take your typical gaming CPUs and RGB.
          | Currently there are standard desktop-sized ATX motherboards
          | that take 3TB of DDR5, and ATX boards that take 1TB of DDR4
          | have existed for years.
        
           | numpad0 wrote:
            | *if money is not an issue. High-end desktop/server
            | platforms often require less cost-efficient Registered or
            | Load-Reduced RAM.
        
         | lopkeny12ko wrote:
          | Why would you want to put swap on a physical disk that is
          | effectively RAM? That seems like a very redundant solution,
          | since swap as a concept becomes irrelevant if main memory
          | and swap are both volatile and equally fast. At that point,
          | just add more main memory. The kernel is designed explicitly
          | under the assumption that the storage backing swap is orders
          | of magnitude slower.
        
           | orev wrote:
           | They clearly state the reason in the last sentence:
           | 
           | > The amount of RAM you were allowed to plug into a PC
           | motherboard always felt like the most annoying limitation.
        
         | antisthenes wrote:
          | Because at the price of this DDR4 RAM expansion disk at, say,
          | 1TB capacity, it would be cheaper to just buy a proper server
          | board that has 16+ RAM slots and run the RAM disk in
          | software?
        
       | pbalcer wrote:
       | Astera Labs have a commercial version of this that works through
       | CXL (assuming you have a platform that supports it :-)), meaning
       | you can actually have load/store access to the memory instead of
       | using the block interface.
       | 
       | https://www.asteralabs.com/applications/memory-expansion/
        
       | hyperific wrote:
       | Where I work we handle massive nd2 time series images often
       | reaching hundreds of GB. From image capture at the microscope to
       | segmentation and some post processing steps the biggest
       | bottleneck for us is disk speed. I'd be very interested to see
       | how fast our pipeline is with one of these at our disposal.
        
         | PragmaticPulp wrote:
         | > From image capture at the microscope to segmentation and some
         | post processing steps the biggest bottleneck for us is disk
         | speed.
         | 
         | If you're doing sequential writes, this drive benchmarks
         | slightly slower than the fastest PCIe 4 NVMe drives on the
         | market.
         | 
         | Upcoming PCIe 5 NVMe drives will be significantly faster than
         | this.
         | 
         | This unit is really only helpful for extremely small random
         | writes or if you're doing so much writing that you exhaust the
         | wear capacity of a normal SSD.
        
         | dsr_ wrote:
         | If your processing is full of random I/O, this would be the
         | right tool.
        
       | [deleted]
        
       | jaclaz wrote:
       | Only a few days ago there was a related Ask HN:
       | 
       | https://news.ycombinator.com/item?id=35109259
        
       | prmph wrote:
       | A bit off-topic:
       | 
       | What they are doing may be very cool, but the language on the
       | page does not inspire as much confidence or professionalism as it
       | should.
       | 
       | I assume English is not their first language, so it would be good
       | for them to get a good copy editor to fix the weird expressions
       | and grammar errors in the article.
       | 
       | This is in the spirit of constructive criticism, and it matters,
       | because I had a harder time parsing some of their explanation as
       | a result of the language use.
       | 
       | Edit: Explanation of rationale of this comment; Removal of a
       | personal experience
        
       | gigel82 wrote:
       | I have a growing stack of RAM chips but also M.2 SSDs and SATA
       | SSDs of varying capacity as I retire and upgrade old machines. It
       | feels so wasteful to not have anything to use them for.
       | 
       | I wouldn't take a precious M.2 SSD slot on my main machine for a
       | 5-year-old 1TB drive, but I'd love to chuck 8 or 10 of them in
       | some enclosure and build a nice performant NAS out of them. Alas,
       | no such thing exists (just now some ARM SBCs are getting M.2
       | support but only PCIe 3.0 x1).
        
         | PragmaticPulp wrote:
         | > I wouldn't take a precious M.2 SSD slot on my main machine
         | for a 5-year-old 1TB drive, but I'd love to chuck 8 or 10 of them
         | in some enclosure and build a nice performant NAS out of them.
         | Alas, no such thing exists
         | 
         | NVMe M.2 drives can go into PCIe slots with an adapter.
         | 
         | If your motherboard supports bifurcation, you can even put 4 x M.2
         | drives into a single x16 slot:
         | https://www.amazon.com/Adapter-4x32Gbps-Individual-Indicator...
         | 
         | It wouldn't be difficult to find an older server motherboard
         | with bifurcation support that could take 8 x M.2 drives with
         | the right, cheap adapters. You'd have to read the manuals very
         | carefully though.
         | 
         | The limit is the number of PCIe lanes. ARM boards rarely
         | have more than a couple lanes. You really need server-grade
         | chips with a lot of I/O.
         | 
         | Or get one of the new cards with a PCIe Switch to connect 21
         | M.2 drives: https://www.apexstoragedesign.com/apexstoragex21
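As an aside for anyone wiring this up: on Linux, the advertised versus negotiated lane count for a slot can be checked with lspci. This is a generic sketch, not from the comment; the bus address is a placeholder for your own device.

```shell
# List NVMe-class devices to find the bus address (e.g. 01:00.0):
lspci | grep -i nvme

# Show the advertised (LnkCap) vs. negotiated (LnkSta) PCIe link
# width and speed; a drive stuck at x1 will show "Width x1" in LnkSta.
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
```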
        
       | fmajid wrote:
       | Back in the day we used something called the DDRdrive X1 as the
       | ZFS ZIL device (essentially a write log) on our high-performance
       | database machines. It's a PCIe card with 4GB of RAM, 4GB of SLC
       | flash, and
       | a supercap so that in the event of a power failure the RAM is
       | written out to flash.
       | 
       | https://ddrdrive.com/menu1.html
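For readers unfamiliar with the setup: a separate intent-log (SLOG) device is attached to a ZFS pool with a single command. This is a generic sketch, not the parent's exact configuration; the pool name and device paths are placeholders.

```shell
# Attach a fast device as a dedicated ZFS intent-log (SLOG) device.
zpool add tank log /dev/nvme1n1

# Mirroring the log device protects in-flight writes if one card dies:
# zpool add tank log mirror /dev/nvme1n1 /dev/nvme2n1

# Confirm the pool now lists a "logs" section:
zpool status tank
```

Only synchronous writes go through the ZIL, which is why a small, fast, power-protected device like the X1 fits the role so well.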
        
         | kjs3 wrote:
         | ZFS was a use case that immediately came to mind.
        
       | MayeulC wrote:
       | I've really wanted something like that where I could just drop in
       | my old RAM sticks and use them as swap space.
       | 
       | It would be much better than flash-based solutions, both latency
       | and endurance-wise, probably even over a USB link (3.1 and above
       | speeds are pretty decent, even 3.0 would be enough for basic
       | swap).
       | 
       | Bonus points for a DIMM slot that just accepts any generation
       | (DDR2,3,4, not sure if that would be mechanically possible?). I
       | retired some DDR4 sticks for higher frequency ones, but the 8GB
       | DDR4-2400 stick I have in the drawer would be quite welcome as
       | swap space on the 4GB soldered RAM laptop I am using...
       | 
       | I may have a go at it myself; I don't think the controller would
       | be too complex if I wrote a custom kernel driver and targeted
       | USB speeds.
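A software-only cousin of this idea already exists on Linux: zram carves a compressed block device out of main memory and swaps onto it. It uses the system's own RAM rather than spare sticks in an external enclosure, but the commands sketch the swap-on-RAM setup (size and priority are illustrative):

```shell
# Create a compressed RAM-backed block device and use it as swap.
sudo modprobe zram
echo 4G | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0
# High priority so the kernel prefers it over any disk-backed swap:
sudo swapon --priority 100 /dev/zram0
swapon --show
```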
        
         | kjs3 wrote:
         | It wouldn't be mechanically possible to support multiple
         | generations[1]. That's not the only issue with mixing
         | generations.
         | 
         | There are also some interesting electrical engineering problems
         | around driving the bus as you add more DIMMs; you probably need
         | multiple CPLD/FPGA memory controllers beyond a certain point.
         | Clocking gets interesting as well. Not impossible, just
         | complicated for an amateur; I know I have problems getting
         | things to work reliably over 33MHz or so.
         | 
         | [1] https://www.simmtester.com/News/PublicationArticle/168
        
       | MuffinFlavored wrote:
       | > This kind of disk is not able to retain data after the power is
       | turned off (unless a supporting battery is used), but has an
       | exceptionally high read/write speed (especially for random
       | access) and an unlimited lifespan
       | 
       | What are some good use cases for this?
        
       | compressedgas wrote:
       | A comment here reminded me of jumbomem
       | https://ccsweb.lanl.gov/~pakin/software/jumbomem/
        
       | loxias wrote:
       | If only Intel hadn't cancelled Pmem. Insanely high speed, density
       | to match NVMe; it could have changed the way we use computers (or at
       | least powered some killer DB servers)
        
         | inasio wrote:
         | Are you talking about Optane/3D-XPoint? The physics behind it
         | seemed insane to me, amazing that they got it to work. I heard
         | that the NVMe protocol was originally designed with it in mind.
        
           | loxias wrote:
           | Yeah, that stuff. It was recently discontinued, Micron pulled
           | out, and there have been some articles about why. Eventually
           | I guess we'll have CXL, which might catch on, but then
           | there's the delay for software support. It's a shame that so
           | much of computing is locked into the "local minimum" of the
           | current architecture that it's difficult to break out into a
           | new area of the search space.
           | 
           | It would be cool to play with a computer with persistent
           | storage at the center, surrounded by a ring of compute, for
           | instance.
           | 
           | And weren't we supposed to have memristors by now? ;)
        
       | arnejenssen wrote:
       | When doing my PhD around 2004 I was running simulations with
       | Fortran programs to do optimizations. A genetic algorithm would
       | call the Fortran program and change the parameters by creating
       | input files and reading output files.
       | 
       | I found out that disk access was the bottleneck of the
       | optimizations. So I used RAM disk software to create a disk
       | drive in RAM. It increased the simulation speed by orders of
       | magnitude.
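On a modern Linux box the same trick is a one-line tmpfs mount. This is a generic sketch, not the author's actual setup; the mount point, size, and program name are placeholders.

```shell
# Mount a 2GB RAM-backed filesystem for the solver's scratch files.
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk

# The optimizer then reads and writes its parameter files there, e.g.:
# ./optimizer --workdir /mnt/ramdisk
```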
        
         | arnejenssen wrote:
         | Some years later (2015?) I tried to speed up the build of
         | JavaScript projects by moving my dev-folder to a RAM-disk, but
         | it didn't really move the needle. So disk I/O is not the
         | limiting factor when building.
        
           | semi-extrinsic wrote:
           | Or your OS started caching the filesystem more aggressively in
           | RAM, so any piece of code that does heavy IO is effectively
           | put on a RAM-disk automatically.
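That effect is easy to observe: after one pass, Linux serves a file from the page cache at RAM speed. A rough sketch (file path and size are arbitrary; the cache-drop step needs root):

```shell
# Create a 256MB test file and flush it out of the page cache.
dd if=/dev/zero of=/tmp/pagecache-demo.bin bs=1M count=256
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches  # start with a cold cache

time cat /tmp/pagecache-demo.bin > /dev/null  # cold read: disk speed
time cat /tmp/pagecache-demo.bin > /dev/null  # warm read: page cache
rm /tmp/pagecache-demo.bin
```

If the warm read is already as fast as you need, moving the working set to an explicit RAM disk buys little, which matches the build-speed observation above.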
        
         | erickhill wrote:
         | Yes, I have something called a RAMLink, which plugged into the
         | back of an "ancient" Commodore 64 or 128. The RAMLink is
         | expandable up to 16MB of RAM. Keep in mind the computers had 64
         | or 128K.
         | 
         | Anyway, the RAMLink was powered but you could also get a
         | battery backup for it (using a sealed lead-acid battery, like a
         | miniature one used in a car). I could move an operating system
         | called GEOS over to the RAMLink and watch it boot in less than
         | 20 seconds, where it usually took 1.5 minutes to read off disks
         | and eventually load. I could then move programs (word
         | processing, graphics creation, terminal programs - you name it)
         | over to the RAMLink and open and use them in 1-2 seconds max.
         | 
         | This is from 1990 technology, running on computers from the
         | mid-80s. RAM Drives/Disks are awesome.
        
           | kevin_thibedeau wrote:
           | The Apple IIgs had RAM disk support built in. It was an
           | immense help to eliminate floppy access if you had more than
           | 1MB of RAM, which few programs could take advantage of.
        
       | myself248 wrote:
       | Any info on where these folks are headquartered? Their store is
       | remarkable in its omission of any such info.
        
       | da768 wrote:
       | Having this in M.2 format would make an awesome swap drive for a
       | bunch of devices out there with undersized soldered RAM.
        
         | AnthonBerg wrote:
         | Just to mention it: There exist M.2 to PCI-Express adapters,
         | and also Thunderbolt to PCIe (and Thunderbolt to M.2 to PCIe.)
         | 
         | Some examples here: https://egpu.io/best-egpu-buyers-guide/
         | 
         | (M.2-to-PCIe are under "M.2/NGFF" in the table I think.)
        
       | bargle0 wrote:
       | Didn't UUnet do something like this 25-30 years ago for Usenet
       | indices, or something like that?
        
       | mecklyuii wrote:
       | Just bought 128GB of DDR5 memory for a consumer board.
       | 
       | And 2x 2TB NVMe drives.
       | 
       | That system is rock solid, relatively cheap, and it's not worth
       | it to have custom-built hardware like this.
        
       | JonChesterfield wrote:
       | An interesting quirk of the later designs is that they're
       | bring-your-own-RAM. That might be a worthwhile thing to do with a
       | pile of DDR3 from an old server. I think I've got 256GB or so in
       | a drawer somewhere that's otherwise unlikely to see any use.
       | 
       | Lithium battery strapped to consumer chips - so you can basically
       | ignore the volatility aspect (possibly in exchange for an
       | exciting new fire risk, not sure how professionally built these
       | things are). That might be objectively better than a pci-e ssd,
       | at least in terms of performance (particularly performance over
       | time).
        
         | outworlder wrote:
         | > Lithium battery strapped to consumer chips - so you can
         | basically ignore the volatility aspect (possibly in exchange
         | for an exciting new fire risk, not sure how professionally
         | built these things are)
         | 
         | Do LiFePO4 instead. Nothing is fireproof but those are pretty
         | tame.
        
         | PragmaticPulp wrote:
         | > An interesting quirk of the later designs is that they're
         | bring-your-own-RAM. That might be a worthwhile thing to do with
         | a pile of DDR3 from an old server. I think I've got 256GB or so
         | in a drawer somewhere that's otherwise unlikely to see any use.
         | 
         | The 256GB version is listed at $280. That's more than enough to
         | buy the fastest 2TB SSDs on the market which will match the
         | performance of this device for most real-world workloads.
         | 
         | Now that SSDs have become so fast, RAM disks really only help
         | with very specific workloads: anything with a massive number of
         | small writes, or anything that needs extreme write endurance.
         | 
         | These could be useful for certain distributed computing and
         | datacenter applications where constant drive writes would wear
         | out normal SSDs too fast.
         | 
         | For most people, buying the card just to make use of some old
         | DIMMs would cost a lot of money for virtually zero real-world
         | performance gain. Modern NVMe SSDs are very fast and it's rare
         | to find a workload that has extreme levels of random writes.
        
           | JonChesterfield wrote:
           | That's the previous version with memory soldered on. There's
           | no price listed for the one with DIMM slots.
           | 
           | However, it turns out I was way out of date on NVMe pricing.
           | So that's awesome, if fairly bad news for this product.
        
       | szpght wrote:
       | Finally, a worthy successor to Optane.
        
       | guenthert wrote:
       | Uh, what case is that supposed to fit in? From the pictures it
       | looks like it's a few inches too tall (not flush with the
       | bracket). I seem to be missing something obvious.
        
         | wmf wrote:
         | Graphics cards are also that tall, so I guess this will fit in
         | a gaming PC.
        
       | DeathArrow wrote:
       | Isn't it simpler to buy a motherboard with 16 RAM slots? And more
       | performant?
        
         | AnthonBerg wrote:
         | It's interesting to consider the difference.
         | 
         | The price grows when going to server/workstation motherboards /
         | CPUs.
         | 
         | And: What if you already have a 16-slot motherboard fully
         | populated with RAM? You can add a whole other computer with
         | 16 more slots, but that's quite a bit of iron, _and_: How best
         | to connect the two? Does there exist an interlink that shunts
         | data between two computers at full PCIe 4.0 x4 speed? Or x8?
         | And how do you control processing on the second computer?
         | 
         | I'm sure there are bigger motherboards yet, but afaik they
         | always come with further components - say, more physical CPU
         | sockets that need to be populated.
         | 
         | There are probably situations where this hardware is the simple
         | way of doing a job.
         | 
         | Also: If the current motherboard already has an unused PCIe
         | slot, then it's kiiiiind of a free return on investment to
         | _use_ that bandwidth, by putting the existing I/O controller to
         | use.
        
         | simplotek wrote:
         | > Isn't it simpler to buy a motherboard with 16 RAM slots?
         | 
         | Why do you think it's a good idea to assemble a whole new
         | computer just because you want more storage?
        
         | davidy123 wrote:
         | This board has a battery, so the memory is retained for up to a
         | year between reboots. So you can copy your data to it once, and
         | it's super fast.
         | 
         | Though with ordinary RAM, initializing it before use by
         | copying, say, 128GB is only going to take a few seconds these
         | days.
        
           | DeathArrow wrote:
           | The point was more along the lines of the RAM bus being
           | faster than PCIe.
        
         | numpad0 wrote:
         | Often those require server RAM.
        
       | tristor wrote:
       | I used to use a piece of software called SuperSpeed RAMDisk on my
       | gaming PC, because I had what at the time (over a decade ago) was
       | a ginormous amount of RAM (32GB) and a then relatively new SATA
       | SSD array, and I would put entire games into memory to nearly
       | eliminate loading screens. These days, NVMe SSDs are so fast in
       | typical use cases that I don't see much benefit to this. It'd be
       | interesting to have, but I'd rather get a PCIe SSD vs wasting
       | that slot for a RAMDisk.
        
       ___________________________________________________________________
       (page generated 2023-03-17 23:01 UTC)