Post B0Un6WpsSiJFTF0yES by graf@poa.st
 (DIR) Post #B0T9gd0KoWcthdbYmm by willow2@femcel.online
       2025-11-21T11:48:09.658444Z
       
       1 likes, 0 repeats
       
       @p how much RAM is your Pleroma instance using? and are you able to horizontally scale it? when I finally get around to learning Elixir I'll probably work on a patch that lets you do that. no excuse not to scale on the Erlang VM.
       
 (DIR) Post #B0TbegWDZgHijuNbFY by p
       2025-11-21T17:01:37.084602Z
       
       5 likes, 0 repeats
       
       @willow2 BEAM isn't the main bottleneck; 280MB resident.  The main scaling issue with Pleroma is Postgres, and it's I/O.  Pleroma itself would require some code changes to scale horizontally, but because it's not the main bottleneck, I don't think anyone's still working on that.  On the other hand, you'd probably become the main person of fedi if you managed to get Pleroma to cope with read replicas.  I don't know if Ecto has any built-in support for this (e.g., load-balancing connection pools around read replicas for SELECTs but always sending writes to the master), but if so, then getting Pleroma to cope might just be a matter of writing a few paragraphs about how to configure the setup; otherwise, probably a lot of work.
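
       [Sketch: Ecto's "replicas and dynamic repositories" guide (linked later in this thread) does describe a pattern for exactly this.  A minimal version, with placeholder module names, is below; each replica repo still needs its own config entry pointing at that replica's hostname, and callers have to ask for Repo.replica() explicitly.]

           defmodule MyApp.Repo do
             use Ecto.Repo,
               otp_app: :my_app,
               adapter: Ecto.Adapters.Postgres

             # Read-only companion repos, one per Postgres read replica.
             @replicas [MyApp.Repo.Replica1, MyApp.Repo.Replica2]

             # Naive load balancing: pick a replica at random for each read.
             def replica, do: Enum.random(@replicas)

             for repo <- @replicas do
               defmodule repo do
                 use Ecto.Repo,
                   otp_app: :my_app,
                   adapter: Ecto.Adapters.Postgres,
                   read_only: true
               end
             end
           end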
       
 (DIR) Post #B0UTBfssFYV3tkao4W by willow2@femcel.online
       2025-11-22T03:01:24.996881Z
       
       1 likes, 0 repeats
       
       @p didn't Poast use like a 128 GB RAM VPS or something? I want big instances to be able to split up into replicas like you said
       
 (DIR) Post #B0UlcTnl4wcBQ18C6C by p
       2025-11-22T06:27:58.865781Z
       
       3 likes, 0 repeats
       
       @willow2 Yeah, Poast is on a regular giant machine, no VPS; 256GB RAM, separate similarly sized machine for Postgres.  FSE, before that machine exploded, was on a refurb that had 384GB of DDR3 (:terrylol2:) in it, which was so much RAM that I had trouble convincing Postgres to fill it up, and then FSE was on some VM inside the Postgres machine with something like 4GB or 8GB dedicated to it, because Pleroma itself wasn't too heavy.

       > I want big instances to be able to split up into replicas

       Absolutely would be nice.  I mean, you know my plan for that.
       
 (DIR) Post #B0UltrjnL1yOZtZFaK by graf@poa.st
       2025-11-22T06:31:04.859266Z
       
       2 likes, 0 repeats
       
       @p @willow2 this is false, we broke one of our dedicated servers' mobos by increasing the RAM to 512G. Currently our Postgres server has 512G and our application and web server has 256G (but it's the fastest DDR5 you can get). Both are EPYC, but the webserver and application server is a 7302 instead of a 7B12 (16c/32t instead of 64c/128t). Poast hasn't been on a VPS since early 2021. Not possible for most of our traffic history without $$$$$
       
 (DIR) Post #B0UmCediGEKZSOgLLc by graf@poa.st
       2025-11-22T06:34:28.617447Z
       
       1 likes, 0 repeats
       
       @p @willow2 enterprise NVMe in RAID 10 btw, I think the throughput is like 8000MB/s or something
       
 (DIR) Post #B0Umr9OdKLGEQlwFxQ by p
       2025-11-22T06:41:50.252955Z
       
       1 likes, 0 repeats
       
       @graf @willow2

       > currently our postgres server has 512G

       Aw dang.  I actually ssh'd in and did `free -m` to check so that I got the number right, but I didn't check the Postgres server.
       
 (DIR) Post #B0Un6WpsSiJFTF0yES by graf@poa.st
       2025-11-22T06:44:34.440529Z
       
       3 likes, 0 repeats
       
       @p @willow2 Yeah, we put 512G in the other mobo, but it had a bad DIMM slot and it cooked the board. I got a jpeg on my phone from when we pulled it out of the cage; I'll dig it out later to show you. All the diag lights bright red and beeping like crazy. That was when the site was down for a day while we tried to rig up a chassis that would support the original 3rd gen proc. We are on borrowed hardware right now. Got to move back over to the new chassis the users bought, hopefully by the end of the year or in the very early new year.
       
 (DIR) Post #B0WLw3sEyhDSIxpAMi by willow2@femcel.online
       2025-11-23T00:49:30.016116Z
       
       1 likes, 0 repeats
       
       @graf @p yeah that's atrocious. a fleet of VPSs would be way cheaper than a VPS with those specs, if one didn't want to build a machine with half a fucking terabyte of RAM. /notmad
       
 (DIR) Post #B0WM43ymTBA6wshOQC by graf@poa.st
       2025-11-23T00:51:00.995789Z
       
       1 likes, 0 repeats
       
       @willow2 @p We get great rates on hardware... except DDR5 now, which is what all our hardware uses. I'm not averse to sharding the database btw, maybe with Citus: https://www.citusdata.com/blog/2023/07/18/citus-12-sch...
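
       [Sketch: from the Elixir side, a first cut at Citus sharding could be an Ecto migration that marks the big table as distributed.  create_distributed_table and undistribute_table are Citus's actual UDFs, but the rest is a placeholder: it assumes the citus extension is already installed, and "activities"/"id" stand in for the real table and distribution key; choosing that key is the actual design problem.]

           defmodule MyApp.Repo.Migrations.DistributeActivities do
             use Ecto.Migration

             # Hypothetical: shard the biggest table across Citus worker
             # nodes, hashed on the distribution column.
             def up do
               execute("SELECT create_distributed_table('activities', 'id')")
             end

             def down do
               execute("SELECT undistribute_table('activities')")
             end
           end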
       
 (DIR) Post #B0XBmYYNFiIqB0JwXI by tidux@poa.st
       2025-11-23T10:30:32.208971Z
       
       0 likes, 0 repeats
       
       @graf @willow2 @p There's also Yugabyte, but that'd be a full-on DB migration since it's not actually PostgreSQL underneath.  https://www.yugabyte.com/
       
 (DIR) Post #B0XDW2p7ehKf2jPPpw by willow2@femcel.online
       2025-11-23T10:49:57.228157Z
       
       1 likes, 0 repeats
       
       @tidux @graf @p “Meet YugabyteDB, the AI-ready, multi-modal, distributed PostgreSQL database for cloud-native apps.” (emphasis mine)
       
 (DIR) Post #B0XDfu2j9kGChrTPN2 by tidux@poa.st
       2025-11-23T10:51:45.206070Z
       
       1 likes, 0 repeats
       
       @willow2 @p @graf Look at the source code.  It's (mostly) client-compatible but totally different inside.  https://github.com/yugabyte/yugabyte-db
       
 (DIR) Post #B0XTPORMU87hKG94Uq by graf@poa.st
       2025-11-23T13:48:01.140198Z
       
       1 likes, 1 repeats
       
       @willow2 @tidux @p
       >AI
       >Database
       No thank you
       
 (DIR) Post #B0YWTlFu5vQAGTWK1Y by tidux@poa.st
       2025-11-24T01:57:10.087512Z
       
       0 likes, 0 repeats
       
       @graf @willow2 @p It's not AI you retard, they're just using "AI-ready" as a marketing term like "cloud ready" or "web scale".  Learning how business weasels use language is how you avoid going bankrupt.
       
 (DIR) Post #B0YZx7Ol9A1tCSLvY8 by p
       2025-11-24T02:36:05.763332Z
       
       0 likes, 0 repeats
       
       @tidux @willow2 @graf Well, there are all sorts of schemes for altering the data storage; try it if you want.
       
 (DIR) Post #B0YaXpaqJC3Qh1ViN6 by p
       2025-11-24T02:42:43.861597Z
       
       0 likes, 0 repeats
       
       @tidux @graf @willow2 Well, it's a flag.  A database that goes hard on marketing is a commercial product.
       
 (DIR) Post #B0YlAKkRDCcpxFsHjs by graf@poa.st
       2025-11-24T04:41:41.472935Z
       
       1 likes, 0 repeats
       
       @tidux @p @willow2 Relax man
       
 (DIR) Post #B0j6yzpMnJxCSRmJV2 by willow2@femcel.online
       2025-11-29T04:33:16.338629Z
       
       1 likes, 0 repeats
       
       @p I think it's probably gonna be a lot of work unless I isolate out the database access code and just make the read replicas dumb frontends for that. If that makes any sense? Otherwise, there's a lot of moving parts.
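
       [Sketch of that isolation idea, reusing the hypothetical Repo.replica/0 helper from the earlier Ecto-docs pattern: one facade module owns all database access, sending reads to a replica and writes to the primary, so the rest of the codebase never touches a repo directly.  All names are placeholders.]

           defmodule MyApp.DB do
             alias MyApp.Repo

             # Reads: any read-only replica will do.
             def all(queryable, opts \\ []), do: Repo.replica().all(queryable, opts)
             def get(schema, id, opts \\ []), do: Repo.replica().get(schema, id, opts)
             def one(queryable, opts \\ []), do: Repo.replica().one(queryable, opts)

             # Writes: always the primary.
             def insert(changeset, opts \\ []), do: Repo.insert(changeset, opts)
             def update(changeset, opts \\ []), do: Repo.update(changeset, opts)
             def delete(struct, opts \\ []), do: Repo.delete(struct, opts)
           end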
       
 (DIR) Post #B0kEk1iySA4Amx7ZMe by p
       2025-11-29T17:34:57.622718Z
       
       1 likes, 0 repeats
       
       @willow2 Yep, that's my read on it.  I looked at https://hexdocs.pm/ecto/3.13.3/replicas-and-dynamic-repositories.html just now and it looks like even that would require a lot of changes.  That's weird to me, like, if the library is generating the SQL, it should know whether a query is a read or involves writing, so it should be able to automatically select a read-only replica for the reads.
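
       [Sketch of why even the documented pattern means a lot of changes: replica selection is explicit, so every read call site has to be edited by hand; Ecto won't inspect the query and route it for you.  MyApp.Repo and the query are placeholders.]

           import Ecto.Query

           # Any read-only query.
           query = from(u in "users", select: u.id)

           # Today: every read goes through the primary.
           MyApp.Repo.all(query)

           # With the replica pattern: each call site must opt in by hand;
           # Ecto does not detect that this is a read and pick a replica.
           MyApp.Repo.replica().all(query)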