Post ASqo7gmNlRl3MFHWq0 by edendestroyer@mindly.social
(DIR) Post #ASqhqlY8AzprVKAAOu by Codeberg@social.anoxinon.de
2023-02-19T20:55:55Z
0 likes, 1 repeat
Contributing to Codeberg: If you can't continuously contribute to Codeberg, but have a lot of experience, e.g. with #Ceph, #Haproxy, #LXC, #Linux and #Networking etc, please bookmark and occasionally check https://codeberg.org/Codeberg-Infrastructure/techstack-support for open issues and questions. This way, you can share your knowledge without committing sustained energy to the project. PS: We are also looking for people to manage discussions and ensure the results are summarized and communicated back.
(DIR) Post #ASqiACCrI5oFgITh2W by selea@social.linux.pizza
2023-02-19T21:01:19Z
0 likes, 0 repeats
@Codeberg Saw that Codeberg was looking for some hot/cold data tiering for Ceph. Here is what I have done on at least two clusters, and it works fine: https://docs.ceph.com/en/latest/rados/operations/cache-tiering/
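A minimal sketch of the cache-tiering setup described in the linked Ceph docs, assuming hypothetical pool names "cold-data" (HDD-backed) and "hot-cache" (SSD-backed); the thread does not name the actual pools or sizes:

    # Attach the SSD pool as a writeback cache tier in front of the HDD pool
    ceph osd tier add cold-data hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-data hot-cache

    # Track object hits with bloom-filter hit sets so the tiering agent
    # can tell which objects are "hot"
    ceph osd pool set hot-cache hit_set_type bloom
    ceph osd pool set hot-cache hit_set_count 12
    ceph osd pool set hot-cache hit_set_period 14400

    # Bound the cache (1 TiB here, purely as an example) and start
    # flushing/evicting before it fills up
    ceph osd pool set hot-cache target_max_bytes 1099511627776
    ceph osd pool set hot-cache cache_target_dirty_ratio 0.4
    ceph osd pool set hot-cache cache_target_full_ratio 0.8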
(DIR) Post #ASqiNMXeAxI7V4TojQ by Sexypink@social.fbxl.net
2023-02-19T21:02:35.966233Z
1 like, 0 repeats
@Codeberg can I Contribute my ass?
(DIR) Post #ASqo7gmNlRl3MFHWq0 by edendestroyer@mindly.social
2023-02-19T22:05:44Z
0 likes, 0 repeats
@pichan
(DIR) Post #ASqohjB6K6vzDq1oWW by Codeberg@social.anoxinon.de
2023-02-19T22:14:34Z
0 likes, 0 repeats
@selea Sounds interesting. What is your workload? Is it comparable to many small accesses? I fear that crawlers regularly fetching all public content could easily mess with the caching; that's why we considered separating Git files (many small reads) from regular files (one linear read) and keeping the latter on HDDs instead. ~ fnetx
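One way to keep a pool on HDDs is a CRUSH rule restricted to the hdd device class; this is only a sketch of that approach, not something Codeberg says it uses, and the pool name "regular-files" is a placeholder:

    # Replicated rule that only places data on OSDs with device class "hdd"
    ceph osd crush rule create-replicated on-hdd default host hdd
    # Pin the pool holding the large, linearly read files to that rule
    ceph osd pool set regular-files crush_rule on-hdd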
(DIR) Post #ASqrooyzRKnd7ARBmC by selea@social.linux.pizza
2023-02-19T22:49:32Z
0 likes, 0 repeats
@Codeberg You can configure Ceph to only add objects to the 'hot tier' after a certain number of hits within a given time window. Another thought: how do you store the metadata? You can choose to store the metadata on SSD only; for us, that gave some improvement. But off the top of my head, I think some kind of HTTP cache (Varnish or similar) is the best and probably least expensive solution in the long run, since you don't have to spend money on SSD/NVMe drives.
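The promotion thresholds and SSD-only metadata mentioned above map roughly to settings like the following; the pool and rule names are placeholders rather than details from either cluster:

    # Only promote an object into the cache tier after it appears in at
    # least two recent hit sets (i.e. hits within a given time window)
    ceph osd pool set hot-cache min_read_recency_for_promote 2
    ceph osd pool set hot-cache min_write_recency_for_promote 2

    # Keep metadata on SSDs only, e.g. a CephFS metadata pool pinned to an
    # ssd-class CRUSH rule
    ceph osd crush rule create-replicated on-ssd default host ssd
    ceph osd pool set cephfs_metadata crush_rule on-ssd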
(DIR) Post #ASqrujWedrCNvZ0F3w by selea@social.linux.pizza
2023-02-19T22:50:36Z
0 likes, 0 repeats
@Codeberg Our workload is random small files (Nextcloud), an official Linux mirror, and S3 storage for private use, so it is very mixed.
(DIR) Post #ASr3XD963wUGIBXB5s by jwalzer@infosec.exchange
2023-02-20T00:58:26Z
0 likes, 0 repeats
@Codeberg I left some comments on some of your issues.
(DIR) Post #ASrmPls7Yzg9c9Q7m4 by Codeberg@social.anoxinon.de
2023-02-20T09:20:07Z
0 likes, 0 repeats
@jwalzer Thank you.