Post AQB2zXvsMPRCCRWz9k by sigi714@ruhr.social
(DIR) Post #AQABnbOMNSwZspH25Q by jerry@infosec.exchange
2022-12-01T12:55:31Z
0 likes, 0 repeats
I knew it was gonna be expensive
(DIR) Post #AQABnbyAELaFfs1ctU by ruud@mastodon.world
2022-12-01T12:56:41Z
0 likes, 0 repeats
@jerry Wow.. what happened? Surely my bill will be lower..
(DIR) Post #AQAC1RVRveQJHHBoG0 by jerry@infosec.exchange
2022-12-01T12:59:11Z
0 likes, 0 repeats
@ruud it'll be lower next time. I was on a mad scramble to handle the load and overbought capacity
(DIR) Post #AQAC6DwHnZlgeE1QBM by ruud@mastodon.world
2022-12-01T13:00:05Z
0 likes, 0 repeats
@jerry I know the feeling.. My cost will be mainly Mailgun, which was an emergency solution. I'll be moving to a cheaper solution soon.
(DIR) Post #AQACFUJAJB881P8nXE by jerry@infosec.exchange
2022-12-01T13:01:43Z
0 likes, 0 repeats
@ruud yep. In an emergency move of desperation, I rented a custom configured AMD Epyc system. It got me over the hump, but I was really happy to just click the cancel button on it.
(DIR) Post #AQACNkj59O5BG3Cj0y by jerry@infosec.exchange
2022-12-01T13:03:12Z
0 likes, 0 repeats
@ruud out of curiosity, have you considered doing your own mail server? I self-host and have exactly one domain I can't send mail to (t-online)
(DIR) Post #AQACPnCstkmxWy7Ixc by ruud@mastodon.world
2022-12-01T13:03:37Z
0 likes, 0 repeats
@jerry Oh, I run on an EPYC now too, an AX161, but that's only 283 EUR/month. That's not too much considering it runs a 120k-user instance without problems, under 25% CPU/RAM usage.
(DIR) Post #AQACY3qytF5FTaoYXg by ruud@mastodon.world
2022-12-01T13:05:05Z
0 likes, 0 repeats
@jerry I was self-hosting the mail (using Mail-in-a-Box), but that stopped working when Mastodon sent 15,000 mails/day (it even peaked at 75,000 mails a day on Mailgun)
(DIR) Post #AQACYanEQCgzdWJWmO by jerry@infosec.exchange
2022-12-01T13:05:07Z
0 likes, 0 repeats
@ruud how many active users do you have? I have ~30k active and I was swamping the Epyc and moved to a fleet of smaller AX101s
(DIR) Post #AQACdhpJhLiStmjAXY by jerry@infosec.exchange
2022-12-01T13:06:06Z
0 likes, 0 repeats
@ruud the epyc I configured had several 8TB nvmes, pushing the price over $500/month
(DIR) Post #AQACdtVuFLJt93jpx2 by ruud@mastodon.world
2022-12-01T13:06:09Z
0 likes, 0 repeats
@jerry No idea. Admin area says 132k active, but I only have about 125k users.
(DIR) Post #AQACgpicoPdQFD5Z2m by ruud@mastodon.world
2022-12-01T13:06:41Z
0 likes, 0 repeats
@jerry Ah, yes I only use the local disks for the database, all media is on S3
(DIR) Post #AQAClFaGaVkrcdHl3Y by ruud@mastodon.world
2022-12-01T13:07:27Z
0 likes, 0 repeats
@jerry
(DIR) Post #AQACpI2FPqgONoIdNY by jerry@infosec.exchange
2022-12-01T13:08:11Z
0 likes, 0 repeats
@ruud you do all on one system?
(DIR) Post #AQACumjEj55l1b1laa by ruud@mastodon.world
2022-12-01T13:09:13Z
0 likes, 0 repeats
@jerry Yes, as long as it can handle it.. The less complexity, the less that can break. If we need to grow, I'm considering scaling with kubernetes on multiple smaller hosts. But I think we can grow a lot on this server
(DIR) Post #AQAD2I60VAbKKsG18i by jerry@infosec.exchange
2022-12-01T13:10:30Z
0 likes, 0 repeats
@ruud how much do you spend on AWS s3?
(DIR) Post #AQADBO8fP5qbgRq6aW by ruud@mastodon.world
2022-12-01T13:12:12Z
0 likes, 0 repeats
@jerry It's not in AWS. It's in Wasabi. That's 6 EUR/TB/month, but they count everything for at least 90 days, so cache cleanup doesn't help :-)
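The billing quirk described above (a flat per-TB rate with a 90-day minimum storage duration, so purging a media cache early saves nothing) can be sketched as a small calculation. The rate and policy are taken from the post; the function name is illustrative:

```python
# Sketch of Wasabi-style billing as described in the thread:
# 6 EUR/TB/month, and every object is billed for at least 90 days
# of storage even if it is deleted sooner.

RATE_EUR_PER_TB_MONTH = 6
MIN_BILLED_DAYS = 90

def billed_eur(size_tb: float, stored_days: int) -> float:
    """Storage cost for one object, applying the 90-day minimum."""
    days = max(stored_days, MIN_BILLED_DAYS)
    return size_tb * RATE_EUR_PER_TB_MONTH * (days / 30)

# A 1 TB cache purged after 10 days is billed the full 90 days anyway:
print(billed_eur(1.0, 10))  # 18.0 EUR, same as keeping it 90 days
```

This is why aggressive cache cleanup doesn't reduce the bill: under the minimum, deleting an object only changes the cost once it has been stored longer than 90 days.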
(DIR) Post #AQB2zXvsMPRCCRWz9k by sigi714@ruhr.social
2022-12-01T15:43:36Z
1 likes, 1 repeats
@jerry
(DIR) Post #AQBrudT5RlgxaBTso4 by dinosm@infosec.exchange
2022-12-01T13:23:39Z
0 likes, 0 repeats
@jerry @ruud how easy is it to distribute to several servers with a shared DB (?) and a load balancer? Is this more or less your setup?
(DIR) Post #AQBruemGZwczdxSiLA by jerry@infosec.exchange
2022-12-01T13:29:20Z
0 likes, 0 repeats
@dinosm @ruud other than redis, the database is the least taxing component. I have 2 systems each serving Sidekiq jobs, nginx/Puma, and nginx/Minio (for object storage), and just one database server. When I built this out, we were adding 2-3k new accounts per day and I had no idea when it would stop, so I built it to be horizontally scalable: I can add additional Puma or Sidekiq or Minio servers in a matter of minutes. But growth is tapering off now. I have to decide if I want to spin up more parallel instances on the same infrastructure or start consolidating.
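The layout described above (one database, interchangeable Puma/Sidekiq/Minio hosts that can be added in minutes) is essentially an nginx upstream pool. A rough sketch, with hostnames, ports, and domain invented for illustration rather than taken from this setup:

```nginx
# Hypothetical sketch of the horizontally scalable web tier:
# identical Puma backends behind one nginx frontend.
# Adding capacity = adding a "server" line and reloading nginx.
upstream mastodon_web {
    least_conn;                  # route each request to the least-busy backend
    server puma1.internal:3000;
    server puma2.internal:3000;
}

server {
    listen 443 ssl;
    server_name example.social;

    location / {
        proxy_pass http://mastodon_web;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Sidekiq scales even more simply, since workers pull jobs from the shared redis queue: new worker hosts need no load balancer entry at all.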
(DIR) Post #AQBrufBR4MlEu1EoZU by HMHackMaster@mastodon.social
2022-12-01T17:58:19Z
0 likes, 0 repeats
@jerry @dinosm @ruud I am reading all of this and I can't help but wonder if dedicated colo space is gonna be cheaper in the long run vs renting elastic compute. I am nowhere close to your scale, but I have 5 physical hosts & a few SANs in a colo, and it's so nice to pay once for hardware. I have room in my cabinet :)
(DIR) Post #AQBrufnim1Nyol9OFM by jerry@infosec.exchange
2022-12-01T18:06:57Z
0 likes, 0 repeats
@HMHackMaster @dinosm @ruud it's a trade-off. In this case, I was able to rapidly expand the number of bare metal servers I had from 2 to 8 in a matter of minutes; when I am done with one of them (which costs $450/month), they take it back. If I have a hardware issue, I can call them and tell them to "fix your sh@t!" or order a new one and be back up and running in minutes. I had servers in a colo for many years and I agree that it's much nicer, cost-wise, but for where I'm at, this consumption model fits my needs better.
(DIR) Post #AQBrugJGsicgObuaQK by jerry@infosec.exchange
2022-12-01T18:09:05Z
0 likes, 0 repeats
@HMHackMaster @dinosm @ruud also, that 2800 euro bill is for 12 systems and includes about 900 euro of one-time setup costs. I think I'll end up in the $1500/month range for December and will start looking at cheaper ways to host now that things are stable
(DIR) Post #AQBrugmL8dsJqlVnjU by HMHackMaster@mastodon.social
2022-12-01T18:13:05Z
0 likes, 0 repeats
@jerry @dinosm @ruud Yea, the ability to expand that quick is fantastic, and it would certainly be neat to be able to dynamically add compute resources (to Sidekiq, for example) as load fluctuates. But then you enter diminishing-rate-of-return land. I had a vendor charge us for 2 years of an EC2 instance; the physical server would have cost 2.5 months of the EC2 cost. And I have saved so much $ and headache vs trying to worry about cloud per-hour costs. And I like dealing with hardware. 🤷
(DIR) Post #AQBruhARh19p3Wn3J2 by HMHackMaster@mastodon.social
2022-12-01T18:20:18Z
0 likes, 0 repeats
@jerry @dinosm @ruud As a side bar, I would love to see your Mastodon topology/setup/config. There are a ton of posts on people's setups, but so much of it misses (what I see as) basic steps. Like how did you set up your Sidekiq servers? Do you have everything installed and just the web & streaming services disabled?
(DIR) Post #AQBruhXqI1sAE5jjm4 by ruud@mastodon.world
2022-12-02T08:23:16Z
0 likes, 0 repeats
@HMHackMaster @jerry @dinosm I run everything on 1 server, in Docker containers.
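A single-host, everything-in-containers deployment like the one described above is roughly what Mastodon's stock `docker-compose.yml` provides. A heavily trimmed sketch (service names follow the official compose file; image tags and the contents of `.env.production` are assumptions, not this instance's actual config):

```yaml
# Illustrative single-server Mastodon stack in Docker containers.
# All state lives in Postgres/redis volumes; media goes to external S3.
version: "3"
services:
  db:
    image: postgres:14-alpine
    volumes: ["./postgres:/var/lib/postgresql/data"]
  redis:
    image: redis:7-alpine
  web:
    image: tootsuite/mastodon
    env_file: .env.production
    command: bundle exec puma -C config/puma.rb
    depends_on: [db, redis]
    ports: ["127.0.0.1:3000:3000"]
  streaming:
    image: tootsuite/mastodon
    env_file: .env.production
    command: node ./streaming
    ports: ["127.0.0.1:4000:4000"]
  sidekiq:
    image: tootsuite/mastodon
    env_file: .env.production
    command: bundle exec sidekiq
    depends_on: [db, redis]
```

The appeal is exactly what the post says: one compose file, one host, minimal moving parts, at the cost of having no failover if that host goes down.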
(DIR) Post #AQCsKRqbT9WQEFkUoi by mijndert@toot.community
2022-12-02T20:02:38Z
1 likes, 0 repeats
@ruud @jerry you do realize AMD Epyc CPUs are made for HPC purposes, right? It's complete and utter overkill for running Mastodon. I also don't think having an instance with 120,000 people is particularly good for the network. More so when all these people are crammed into 1 server / a single point of failure. To each their own.
(DIR) Post #AQCsYB3ShUSOCZJHVY by ruud@mastodon.world
2022-12-02T20:05:09Z
0 likes, 0 repeats
@mijndert @jerry Overkill? I see other admins struggling with performance. I have no issues. I pay less than most of them. I'm content. I agree that people should be spread more evenly, but that's up to the people of joinmastodon; they should make an algorithm for which server to show first in the app. Users just want to click and use.
(DIR) Post #AQCt14vyXJF9cHabdA by ruud@mastodon.world
2022-12-02T20:10:18Z
0 likes, 0 repeats
@mijndert @jerry I think we disagree on that point. That's fine. By the way, we do have backups (in 2 locations) and 2 extra admins, so I think it's way less of a risk than on 90% of the 1-admin Raspberry Pi instances. My opinion.
(DIR) Post #AQCtRTEeo6GmVyucsq by mijndert@toot.community
2022-12-02T20:08:12Z
1 likes, 0 repeats
@ruud @jerry you're blaming the unsuspecting people for your insane user count? Instead of closing registrations to protect the network? And you're fine with cramming all these people on 1 server that can crash and burn at any given moment? I think you need to stop blaming others and not be too proud of what you're doing here. But again, to each their own. I for one wouldn't feel comfortable on your instance.
(DIR) Post #AQCtRZJ8ENPjLb3Hgu by ruud@mastodon.world
2022-12-02T20:15:10Z
0 likes, 0 repeats
@mijndert By the way, your instance has 27k users. At what point will you close registrations??
(DIR) Post #AQCtdpaRWyq9JMzqBk by mijndert@toot.community
2022-12-02T20:17:20Z
0 likes, 0 repeats
@ruud we've been optimizing the living shit out of it, and we're running at lower costs now than we did at 15k people. Also, we run on an autoscaling, self-healing, event-driven architecture. No single points of failure here, so way less of a risk. But thanks for asking.
(DIR) Post #AQCussZun5pMt4UN4y by jerry@infosec.exchange
2022-12-02T20:31:14Z
0 likes, 0 repeats
@ruud @mijndert that's the paradox. I strongly suspect most larger instances are much more resilient and less prone to failure/shutdown than smaller ones, but it has a larger impact in the unlikely event that it does happen