Post A47aFmlS3OD1BOe99E by hlesesne@fosstodon.org
(DIR) Post #A47TbST7UfrcvdSXOy by alex@gleasonator.com
2021-02-10T04:26:32.103027Z
0 likes, 0 repeats
Ahhh this already looks better than Ceph for block storage in K8s https://longhorn.io
(DIR) Post #A47aFmlS3OD1BOe99E by hlesesne@fosstodon.org
2021-02-10T05:39:32Z
1 like, 0 repeats
@alex so far, we haven’t seen a lot of pain with ceph, but almost everything is POC. Any insight you could share on the pain points?
(DIR) Post #A47aReOB1EuSgLPx3I by alex@gleasonator.com
2021-02-10T05:43:13.297266Z
1 like, 0 repeats
@hlesesne I’m short on time with not enough money to run a multi-node storage cluster. I’m just trying to aim for a basic solution on bare metal. I like that Longhorn looks straightforward and doesn’t try to do too much. I haven’t gotten around to actually trying Ceph because my brain is overflowing with new information
(DIR) Post #A47maW8AI7Lv7qvGcq by r000t@fedi.site
2021-02-10T07:59:18.213092Z
1 like, 0 repeats
@alex auhguahhguhgauhughauhguahughghaguh fucking fhguh why fuck. so you're already using.... containers. why are you using block storage with them? you give up like 20 amazing benefits of using a chrooted filesystem. it's more performant, backups were brought up a while ago, and on top of that, these are fedi servers, if you can't deduplicate across customers, you'll never be able to pay for storage. "but then I can't migrate them" vms are for migrating. containers are for shooting in the face and pushing the auto-romeo-maker button. migrating containers is like washing paper plates. "but I need the storage to be elsewhere" nope. all the bulk storage you need can be done over s3. this gives you 10 times more options in terms of storage and makes them all interchangeable.
(DIR) Post #A48SF0JzYfkYSrsYSG by alex@gleasonator.com
2021-02-10T15:46:01.521000Z
0 likes, 0 repeats
@r000t I don’t get it. I can’t store a postgres database over S3. I was happy to use local storage, but I discovered last night that a pod can get scheduled to the wrong node and not be able to access it. I just need storage that works, that’s it.
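The scheduling problem described above comes from local PersistentVolumes being pinned to a single node: unless the PV declares a nodeAffinity, Kubernetes may place the pod on a node where the data doesn't exist. A minimal sketch of the pinned setup (node name, path, and sizes are hypothetical):

```yaml
# Hypothetical local PersistentVolume pinned to the node that holds the data.
# Any pod bound to this PV through a claim can only be scheduled onto worker-1.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-data
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/pg        # directory on worker-1's disk
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1
```

This pins correctness at the cost of mobility: if worker-1 goes down, the pod cannot be rescheduled elsewhere, which is exactly the gap replicated block storage like Longhorn or Ceph is meant to fill.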
(DIR) Post #A48SQH11pWulQTNmRU by alex@gleasonator.com
2021-02-10T15:48:03.336150Z
0 likes, 0 repeats
@r000t I’m not going to deduplicate postgres across customers. I’m gonna prune old posts. Deduplicating uploads isn’t a problem.
(DIR) Post #A48Wc0DuUML6daFwg4 by r000t@fedi.site
2021-02-10T16:35:00.977851Z
1 like, 0 repeats
@alex Postgres doesn't save to a block device. It saves to a file. But you should also consider that a postgres process for each site is already wasteful. While adapting pleroma to be "multi-tenant" is way out of scope, using a central postgres server means replication and easier backups anyway. And not to sound like a mongodb guy, but it scales way harder.
(DIR) Post #A48XMBi27IZza5Mvc8 by r000t@fedi.site
2021-02-10T16:43:21.274938Z
0 likes, 0 repeats
@alex Oh, and for s3 I said bulk storage. Media and shit. Postgres' local persistent storage isn't really all that large. Allocating 8GB locally per customer is a lot easier than 30 or more.
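For context on the "media over S3" point: Pleroma ships an S3 uploader, so media can be offloaded to any S3-compatible store while Postgres keeps only its comparatively small local data. A sketch of that config, assuming the standard Pleroma.Uploaders.S3 module (the bucket name here is hypothetical):

```elixir
# Sketch: offload Pleroma media uploads to S3-compatible storage.
# "fedi-media" is a hypothetical bucket name; credentials go in the
# ex_aws config, which Pleroma uses under the hood.
config :pleroma, Pleroma.Upload, uploader: Pleroma.Uploaders.S3

config :pleroma, Pleroma.Uploaders.S3,
  bucket: "fedi-media"
```

With uploads out of the cluster's storage path, the per-customer local footprint is roughly the Postgres data directory, which is the "8GB instead of 30 or more" argument above.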
(DIR) Post #A48ZPLiJZnL7O28Tcu by alex@gleasonator.com
2021-02-10T17:06:19.724258Z
1 like, 0 repeats
@r000t I’ve been really considering running one giant postgres cluster with a database per customer. It would just require a lot of changes right now and I want to launch this thing sometime in my lifetime… live and learn.
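The "one giant postgres cluster, database per customer" layout needs no multi-tenant changes inside Pleroma itself; each instance just gets its own role and database in the shared cluster. A plain-SQL sketch (role and database names are hypothetical):

```sql
-- One shared Postgres cluster, one role + database per customer.
-- Names and password are placeholders.
CREATE ROLE customer_a LOGIN PASSWORD 'changeme';
CREATE DATABASE pleroma_customer_a OWNER customer_a;

-- Each Pleroma instance connects with its own credentials, so tenants
-- stay isolated, while backups and replication are configured once,
-- cluster-wide.
```

The trade-off the thread lands on: this is operationally simpler long-term, but migrating existing per-pod Postgres instances into it is the kind of change that delays a launch.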