Post ArTWvCgl6OlUIX3LGK by snacks@netzsphaere.xyz
(DIR) Post #ArTOTIKHaaWJxKugYy by latein@cawfee.club
2025-02-25T07:33:05.369581Z
0 likes, 1 repeats
@s8n @fiore @kirby @evamik
(DIR) Post #ArTTtj4DI0ieBuosyG by kirby@annihilation.social
2025-02-25T07:05:07.365278Z
1 likes, 1 repeats
@dcc @fiore @snacks but like you have to go through every single row of the database, unmarshal the json, look at the value in a key to find what you're looking for if you're making a query......... that's hardly efficient
(DIR) Post #ArTTtjlShDJWM33QNk by phnt@fluffytail.org
2025-02-25T07:11:14.833691Z
3 likes, 0 repeats
@kirby @dcc @fiore @snacks The JSON for Pleroma is stored in a binary format, so there's no need to parse it again once it is in the DB. The overhead is at insertion time. Once it's in the DB, Postgres has its own representation of it that it can understand without re-parsing it all the time. It doesn't store whitespace in the JSON itself, and it doesn't store duplicate keys. To skim over the details, it's basically stored in the already-parsed representation. To simplify even more, imagine Postgres ran jq on the JSON and stored jq's internal representation of it.
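A minimal sketch of what that buys you, assuming Pleroma's usual objects table with a jsonb data column (table and key names are illustrative, not a definitive query from the codebase):

    -- jsonb is stored pre-parsed, so Postgres evaluates these
    -- operators without re-parsing JSON text row by row
    SELECT data->>'id'
    FROM objects
    WHERE data->>'type' = 'Note'
    LIMIT 10;

    -- containment checks work directly on the stored representation
    SELECT count(*)
    FROM objects
    WHERE data @> '{"type": "Note"}';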
(DIR) Post #ArTTtkcdVVqf0xvtGS by dcc@annihilation.social
2025-02-25T07:12:48.036085Z
0 likes, 1 repeats
@phnt @kirby @fiore @snacks Btw do you know why kirb's dumps don't seem to have 20 gb of data? (activities and notifications)
(DIR) Post #ArTTtlAfSz4QiVr4JE by kirby@annihilation.social
2025-02-25T07:14:35.869828Z
0 likes, 1 repeats
@dcc @phnt @fiore @snacks oh yeah we've been having this problem where we have a 40 gb data dump but only 20 gb of data ends up inside postgres, and there seem to be no activities inside the imported database when we start up pleroma. i also had this issue with fre, but that was solved by making a custom-format postgres dump and waiting 2 hours for the entire thing to import into another machine, so i thought nothing of it. but we tried that here and after 1 or 2 days of importing the entire thing we ended up with the same problem
(DIR) Post #ArTTtloN5MpUheQmC8 by dcc@annihilation.social
2025-02-25T07:15:04.620495Z
1 likes, 1 repeats
@kirby @fiore @phnt @snacks Separate data and schema
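One common way to do that split is two pg_dump passes; a rough sketch, assuming the database is named pleroma and the output file names are just examples:

    # schema only, then data only
    pg_dump --schema-only -d pleroma -f p_schema.sql
    pg_dump --data-only   -d pleroma -f p_data.sql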
(DIR) Post #ArTTtqnqWkNeBTDvZQ by kirby@annihilation.social
2025-02-25T07:15:31.118306Z
0 likes, 1 repeats
@dcc @fiore @phnt @snacks that's for this attempt which only took about an hour or two to import into the db
(DIR) Post #ArTU2Wqc5eGQtqejZo by dcc@annihilation.social
2025-02-25T07:16:30.121073Z
0 likes, 1 repeats
@phnt @kirby @fiore @snacks This time (the second time), with separate schema and data (the data is the right size), it's only 20gb in postgres.
(DIR) Post #ArTUEtrWLdjBklymye by phnt@fluffytail.org
2025-02-25T07:18:19.697486Z
1 likes, 0 repeats
@dcc @kirby @fiore @snacks Are indexes built properly? Did psql spit out any errors when ingesting? I don't see a simple reason as to why it would be so small after an import. Even FluffyTail is bigger than 20GB after 1.5 years.
(DIR) Post #ArTUIzgIeffsxWgPke by dcc@annihilation.social
2025-02-25T07:18:55.888090Z
0 likes, 1 repeats
@phnt @kirby @fiore @snacks You see what's weird
screenshot_25_February_24_23-17-16.png
screenshot_25_February_24_23-18-45.png
(DIR) Post #ArTUMS5BW2jdufyJFY by kirby@annihilation.social
2025-02-25T07:20:59.872024Z
0 likes, 1 repeats
@phnt @dcc @fiore @snacks there was this one error during the import process but this is likely being spit out cause i fucked with the database by bookmarking an object and then deleting the object from the db last year
psql:p_data.sql:28793676: ERROR: insert or update on table "bookmarks" violates foreign key constraint "bookmarks_activity_id_fkey"
DETAIL: Key (activity_id)=(00000189-c415-0f21-5936-3173ee4a0000) is not present in table "activities".
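For reference, a sketch of how you could spot the offending rows on the source database, assuming the foreign key points at activities.id as the error suggests (column names taken from the error message, nothing else verified):

    -- bookmark rows whose activity no longer exists,
    -- i.e. the rows that trip bookmarks_activity_id_fkey
    SELECT b.activity_id
    FROM bookmarks b
    LEFT JOIN activities a ON a.id = b.activity_id
    WHERE a.id IS NULL;

    -- deleting them before dumping would make the import clean;
    -- only after checking the output above:
    -- DELETE FROM bookmarks b
    --   WHERE NOT EXISTS (SELECT 1 FROM activities a WHERE a.id = b.activity_id);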
(DIR) Post #ArTUrsiEiVTDZ923bU by phnt@fluffytail.org
2025-02-25T07:24:07.365908Z
1 likes, 0 repeats
@dcc @kirby @fiore @snacks Did psql spit out any errors when ingesting? I suspect this, if psql was not run with -v ON_ERROR_EXIT=1. It could have thrown an error, skipped a big table (likely activities or objects), and continued the import. There should be zero errors coming from psql; the only exception is psql complaining that a different version of Postgres was used for the dump than the one currently running on the import side. In that case the warning can be ignored and the import run without -v ON_ERROR_EXIT=1, but there still shouldn't be any other warnings or errors.
(DIR) Post #ArTUss2YhYyHlq0eFU by graf@poa.st
2025-02-25T07:26:11.170804Z
1 likes, 0 repeats
@phnt @dcc @kirby @fiore @snacks yeah he's 100% missing half his dump. kirby do you have access to the original data or just this backup?
(DIR) Post #ArTUssgyHJIVnAuvEu by kirby@annihilation.social
2025-02-25T07:26:39.529405Z
0 likes, 1 repeats
@graf @dcc @fiore @phnt @snacks i have the original database on hand still
(DIR) Post #ArTUtsm5SOEGKzmsoy by phnt@fluffytail.org
2025-02-25T07:25:24.435296Z
2 likes, 0 repeats
@kirby @dcc @fiore @snacks That's a big fail, as the foreign key cannot find an activity with that id in the activities table. At least that's what I think is happening. The activities table might be incomplete after an import.
(DIR) Post #ArTVJ2jXZD7ISqasQy by snacks@netzsphaere.xyz
2025-02-25T07:26:56.850686Z
1 likes, 0 repeats
@kirby @dcc @fiore @phnt idk if that's actually the case, but it might prevent a lot of stuff from being imported since fk dependencies aren't met. Don't take my word for it tho, i know jackshit about the pleroma db apart from the json thing
(DIR) Post #ArTVOIUwNl8qnyGowC by graf@poa.st
2025-02-25T07:29:13.102970Z
0 likes, 0 repeats
@kirby @dcc @fiore @phnt @snacks https://annihilation.social/objects/3ed8dd6a-3446-462a-9eea-2a1c9e8d4a63
was there any error spit out here? we had to restore poast from a backup once after gleason added editing to rebased, and it took 13 hours (and I had a whole night's sleep) before I realized you could run it with --jobs
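The --jobs flag belongs to pg_restore and only works with the custom or directory dump formats; a rough sketch, with database and file names as placeholders:

    # custom-format dump, then parallel restore with 4 workers
    pg_dump -Fc -d pleroma -f pleroma.dump
    pg_restore --jobs=4 -d pleroma_new pleroma.dump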
(DIR) Post #ArTVOJIDQYYbGnKAk4 by kirby@annihilation.social
2025-02-25T07:30:40.467163Z
0 likes, 1 repeats
@graf @dcc @fiore @phnt @snacks aside from this no
(DIR) Post #ArTVnlgD1RMyYAgcDo by phnt@fluffytail.org
2025-02-25T07:36:00.932990Z
1 likes, 0 repeats
@dcc @fiore @kirby @snacks btw the variable isn't ON_ERROR_EXIT, but ON_ERROR_STOP.
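With the corrected variable name, a plain-format restore that actually aborts on the first error would look roughly like this (database and file names are illustrative):

    psql -v ON_ERROR_STOP=1 -d pleroma_new -f p_schema.sql
    psql -v ON_ERROR_STOP=1 -d pleroma_new -f p_data.sql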
(DIR) Post #ArTVyHbpAHGkTNeJuK by fiore@brain.worm.pink
2025-02-25T06:46:12.563071Z
4 likes, 2 repeats
when fedi admins talk about the database it feels like theyre interacting with some sort of cosmic horror creature
(DIR) Post #ArTWvBZHGjBcp2Xqxk by kirby@mai.waifuism.life
2025-02-25T06:46:21.159Z
1 likes, 0 repeats
@fiore@brain.worm.pink as of right now i'm getting the impression that it's exactly that
(DIR) Post #ArTWvCgl6OlUIX3LGK by snacks@netzsphaere.xyz
2025-02-25T06:47:16.260902Z
0 likes, 0 repeats
@kirby @fiore nah, it just works
(DIR) Post #ArTWvDGuvxgk6fyDce by kirby@mai.waifuism.life
2025-02-25T06:48:16.356Z
0 likes, 0 repeats
@snacks@netzsphaere.xyz @fiore@brain.worm.pink :toddhoward: It just works.
(DIR) Post #ArTWvDsqew1u0JiVkG by snacks@netzsphaere.xyz
2025-02-25T06:49:06.261921Z
0 likes, 1 repeats
@kirby @fiore as shrimple as that
(DIR) Post #ArTWvK3Linzj8cfywi by snacks@netzsphaere.xyz
2025-02-25T06:47:47.189266Z
0 likes, 0 repeats
@kirby @fiore apart from the segfaults
(DIR) Post #ArTYGFsUXWb0Dz8uuW by tisanae@brain.worm.pink
2025-02-25T11:32:33.838910Z
1 likes, 0 repeats
@fiore chillen at the database
(DIR) Post #ArTYGZaRGM2TLmqmY4 by fiore@brain.worm.pink
2025-02-25T11:34:43.336269Z
0 likes, 0 repeats
@tisanae YEA EXACTLY
(DIR) Post #ArTYUiJ3dkEYRqSmiO by fiore@brain.worm.pink
2025-02-25T06:52:05.386859Z
2 likes, 0 repeats
its either that or i should just learn sql . but since i have no vices and i am perfect,
(DIR) Post #ArTYUj7kbGmcz4BGjI by snacks@netzsphaere.xyz
2025-02-25T06:53:35.094681Z
0 likes, 0 repeats
@fiore sql doesn't help you with pleroma
(DIR) Post #ArTYUk1PGLIplgDiTo by fiore@brain.worm.pink
2025-02-25T06:55:26.259597Z
0 likes, 0 repeats
@snacks what db does pleroma use
(DIR) Post #ArTYUklqTgRw5hwnrc by snacks@netzsphaere.xyz
2025-02-25T06:57:51.556963Z
0 likes, 0 repeats
@fiore postgres, but almost everything is stored in json, not in tables and rows
(DIR) Post #ArTYUlH2bhP3eSXiUK by Suiseiseki@freesoftwareextremist.com
2025-02-25T11:37:10.729393Z
1 likes, 0 repeats
@snacks @fiore >almost everything is stored in json, not in tables and rows
>Have database.
>Don't use it as you should use databases - just JSON encode everything, killing the performance.
(DIR) Post #ArTqUjrBfpnuQpaYoC by dcc@annihilation.social
2025-02-25T06:46:52.998543Z
0 likes, 1 repeats
@fiore It's nothing special, it's just a big text file
(DIR) Post #ArTqjop3UgewAymYnQ by fiore@brain.worm.pink
2025-02-25T06:48:34.788925Z
0 likes, 0 repeats
@dcc shouldve just used json smh
(DIR) Post #ArTqjsOcCvt1G8RV8j by dcc@annihilation.social
2025-02-25T06:49:27.096117Z
0 likes, 1 repeats
@fiore You can make a db out of one text file, its just not efficient.
(DIR) Post #ArTrjJ0BtwafRPJHOK by kirby@annihilation.social
2025-02-25T06:58:30.654485Z
0 likes, 1 repeats
@snacks @fiore :chirumiru_cirno_dance: the efficiency of such a schema is widely disputed
(DIR) Post #ArTs1pULr8pzPOFZmi by dcc@annihilation.social
2025-02-25T07:05:48.682120Z
1 likes, 1 repeats
@kirby @fiore @snacks You have to spit something out some way, maybe i could think of a better way myself (my vps biz is just going to use sqlite lol)
(DIR) Post #ArTs2vjG8jAP3JkRSi by dcc@annihilation.social
2025-02-25T07:03:44.679203Z
1 likes, 1 repeats
@kirby @snacks @fiore You just need to know json and postgres....... WAIT A SECOND
(DIR) Post #ArTsHDbQTzLYXathGC by phnt@fluffytail.org
2025-02-25T07:03:52.233279Z
1 likes, 0 repeats
@snacks @fiore It makes sense compared to normalizing every object into 30 columns every time it comes in or needs to go out. The big downside is the huge size of the tables and the indexes needed for it to work quickly. With indexes, the performance is about the same as a normal DB schema.
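The kind of index that makes jsonb queries competitive is typically a GIN index over the data column, plus expression indexes on frequently queried keys; a minimal sketch, not necessarily Pleroma's exact index set (index names are made up):

    -- lets containment queries (data @> ...) use the index
    CREATE INDEX objects_data_gin ON objects USING gin (data);

    -- expression index for a key that is filtered on constantly
    CREATE INDEX activities_actor_idx ON activities ((data->>'actor'));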
(DIR) Post #ArTsV1Hr4KJOOmK9b6 by kirby@annihilation.social
2025-02-25T07:07:47.973762Z
1 likes, 1 repeats
@dcc @fiore @snacks you could just have a traditional database schema set up, you don't have to store json inside the database like pleroma does
(DIR) Post #ArTsvSEfUsczVWRFgG by kirby@annihilation.social
2025-02-25T07:12:28.452563Z
0 likes, 1 repeats
@phnt @dcc @fiore @snacks still leaves unpredictable size problems and the thing you just mentioned
(DIR) Post #ArTvEHa7X4H1sJZv04 by grips@cawfee.club
2025-02-25T15:51:56.266286Z
0 likes, 0 repeats
@graf @dcc @kirby @fiore @phnt @snacks YOU CAN DO THAT????? :exploding_boar:
(DIR) Post #ArWYUe0AEBj0ZUTMMy by mikoto@akko.wtf
2025-02-26T22:17:36.524885Z
3 likes, 0 repeats
livin in the database, database
just livin in the database
wo-oah
(DIR) Post #ArWdkWumqLsgzqLre4 by steffo@a.junimo.party
2025-02-26T22:19:27Z
1 likes, 0 repeats
@fiore for akkoma admins that's true