(DIR) Post #A8SCrnczQxZK2xsyaO by yes@social.handholding.io
2021-06-19T19:55:40.831920Z
3 likes, 0 repeats
cloud computing is a codeword for "application depending on a remote web server on AWS that can be shut off by the whim of jeff bezos"
(DIR) Post #A8SDHrgOyoUs9VuWGG by fluffy@social.handholding.io
2021-06-19T20:00:23.172846Z
1 likes, 0 repeats
@yes "just use docker"
(DIR) Post #A8SDLBe3DZ1AvsOYCG by yes@social.handholding.io
2021-06-19T20:00:59.500089Z
0 likes, 0 repeats
@fluffy okay, now your docker containers are running on jeff bezos' servers. did anything change? no.
(DIR) Post #A8SECKtZYWGUyBzyLo by rozenglass@anime.website
2021-06-19T20:10:34.817763Z
2 likes, 1 repeats
@fluffy @yes Kubernetes clusters that span multiple cloud providers to form a "multi-cloud" are a thing apparently. If for some reason your services on a specific cloud provider were lost, theoretically, the system can off-load them to a cloud provider that is still in service.
(DIR) Post #A8SEI4jyX27KbWFKoi by rozenglass@anime.website
2021-06-19T20:11:37.862185Z
1 likes, 0 repeats
@yes they have the computers, just give them your data, and give them your money. all the rest is yours.
(DIR) Post #A8SEWzbRunur0wN5Rw by fluffy@social.handholding.io
2021-06-19T20:14:19.412221Z
2 likes, 0 repeats
@rozenglass @yes yes yes yes "just use k8s" nailed it
(DIR) Post #A8SEbWcv4l3casDjFo by tn5421@fedi.absturztau.be
2021-06-19T20:15:08.497998Z
0 likes, 0 repeats
@fluffy @rozenglass @yes just sacrifice ur firstborn to the internet gods
(DIR) Post #A8SImoo6Do4tyt5eaW by rozenglass@anime.website
2021-06-19T21:01:59.352700Z
1 likes, 0 repeats
@fluffy @yes yes! I mean, adding four or five million lines of code, written by 3000 different authors, and with 2000 open issues of which 300 are marked as bugs, is _bound_ to make your production environment a lot more reliable, and your data a lot more secure. Trust me, I tried, and now I must propagate this nonsense to justify the three years of my life that I wasted on this thing, and ensure that I will still find a job in the future... Maybe I should start a course about it too, and tell people how amazing and in-demand it is! /s. excuse me, I feel dirty now, I must go wash my mouth with the gentoo on my server I guess.
(DIR) Post #A8SJovh7gXNBsLOi5A by fluffy@social.handholding.io
2021-06-19T21:13:35.474342Z
0 likes, 0 repeats
@rozenglass @yes what did you make
(DIR) Post #A8SLkpHdmM8sk4eKSe by yes@social.handholding.io
2021-06-19T21:35:15.883635Z
1 likes, 0 repeats
@fluffy @rozenglass probably a mess
(DIR) Post #A8SS7sLl8xkUdbwxrU by rozenglass@anime.website
2021-06-19T22:46:38.553948Z
2 likes, 2 repeats
@fluffy @yes I made, with one other dev, the backend / infrastructure stuff for some mid-sized company, serving their mobile apps and internal services and whatnot. It serves a few tens of thousands of users concurrently at any moment, with some 20 million events daily. I've been maintaining that for almost three years so far.

k8s (and its ecosystem in general) is super complex, and has way, way too many concepts; it was probably overkill for our size. But tbh it wasn't that bad in practice, and some features are really comfy. When things go wrong, though, you have to dig in deep: so many moving pieces, all interacting in the weirdest ways. Maybe I'm just too idealistic, wanting everything to be minimal and clean and simple and straight-to-the-point. I don't know.

My current position is that you _probably_ don't need k8s, ever, because one good machine or two will serve you well in most cases, if your system is well built and written. k8s and its ilk are mostly redundant re-implementations of tools that we already had at the OS level for a long time.

But I also admit that there's no solution, at the OS level, for abstracting and orchestrating large-scale clusters hosting critical applications. The closest we came to a "cluster-aware" operating system was probably Plan9. Linux is not cluster-aware, so we end up re-inventing everything in a cluster-aware system, at a higher level: service management, logging, monitoring, users and permissions, etc.

It is obvious that we have a "hype" phase on our hands, with containers, and Docker, and Kubernetes, and "cloud-native technology". It's been going on for a few years now. Everyone thinks themselves the new Google, operating at Netflix-scale, or something, and wants the most complex system possible, thinking that it will make them the cool kids. I operated a Kubernetes cluster, and a Docker swarm, for a few years each, and so far I have very strong reservations against the mindless hype crowds, and their utopian dreams and claims. Most of it is big-corp marketing and PR.

Containers themselves, in their most fundamental sense (kernel process namespaces, with cgroups to define some limits, some virtual networks, etc.) are pretty good imho. They are the simpler, minimalist, more reasonable solution to most cases where virtual machines would have been traditionally used. I like containers. Docker and k8s are twisted, corrupt, overloaded corporate versions, far removed from their pure origins. If you are curious, try constructing containers by hand, or using LXC, or something like that, and get a feel for the "containers as simpler, better VMs" thing, without all the trendy over-hyped nonsense.

I just feel sad that many new kids these days are growing up in a Kubernetes/Docker/Cloud world, thinking in their terms, and never learning about the simpler, reliable, good ol' stuff. The new kids be like "but how can I redirect traffic to my app pod on that node without a kubernetes ingress service?!?!!", and we have to sigh and respond: "you proxy-pass requests to your application hosted on that server by writing a goddamned config file for your NGINX reverse-proxy".

On second thought, both sound stupid, everything sucks, and we're just a bunch of autistic nerds getting triggered all about it :P

May your machines clock true.
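[For anyone who hasn't seen the "good ol'" side of that comparison: the NGINX reverse-proxy setup alluded to above really is just a short config file. The hostname and port here are invented for illustration.]

```nginx
# /etc/nginx/conf.d/myapp.conf -- hypothetical example
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        # forward incoming requests to the app listening locally on this server
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

[Reload nginx and traffic for myapp.example.com lands on the application: no ingress controller, no API objects, one file.]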
(DIR) Post #A8SSNj1XaMeoLZ3CK0 by rozenglass@anime.website
2021-06-19T22:49:30.514792Z
0 likes, 0 repeats
@yes @fluffy sometimes I wake up and have to wonder how come it's all still running, and how the heck our world isn't burning to the ground already... wait... :3:
(DIR) Post #A8SUs1dZ971WfePdgW by fluffy@social.handholding.io
2021-06-19T23:17:24.413594Z
0 likes, 0 repeats
@rozenglass @yes
> My current position is that you _probably_ don't need k8s, ever, because one good machine or two will serve you well in most cases, if your system is well built and written. k8s and its ilk are mostly redundant re-implementations of tools that we already had on the OS-level for a long time.
which tools do you mean?
(DIR) Post #A8SYchYjucq7sD6Tzc by rozenglass@anime.website
2021-06-19T23:59:25.949515Z
1 likes, 1 repeats
@fluffy @yes I liken K8s to an operating system, albeit a cluster-aware one, replacing Unix/Posix workflows. Many of the "cloud-native" tools surrounding it are re-implementations of Unix/Posix tools that have been around for a long time.

For example, when you use K8s you will probably want a logging system, and now you install fluent-bit and collect the logs to Loki, or some ElasticSearch database. Why? We already had logging in Unix, and it worked! Log files were nice, and you could just grep over them and find anything you want. But the problem is that they are _not cluster-aware_ by default. You have 20 machines, now you have 20 log files, so you need something above that unifies them all.

Same for cronjobs. Kubernetes implements its own cronjob mechanism, used to schedule jobs that can run on any available node that meets your constraints, but we already had cronjobs in Unix. Problem is, again, they are not cluster-aware.

Unix user accounts and permissions? Well, nope, now you have Kubernetes namespaces, roles, role-bindings, and access tokens. Unix mail server sending you a notification when errors occur? Nope, now there's a Sentry node that collects events and statistics about errors from the swarm and sends you emails when failure rates reach some threshold. Unix `top` command to see resource usage? Nope, now there's a Prometheus time-series database that collects metrics, log-shipping daemons that send them, and Grafana dashboards that display them. Using your Unix init system to start services, and some daemontools to watch over them and restart them when they fail? Nope, now it is Kubernetes schedulers, reading YAML-defined API objects from a number of distributed etcd databases (after they vote and reach consensus).

Want to make an LVM volume to give some disk space to your new application? No no no, now you have to make a Volume Claim, defining your requirements, and then nodes in the cluster will volunteer to serve your claim if they have attached disks that fit the request, by binding their volumes to your claim; otherwise a storage provider will provision new hardware for you according to your requested spec. Want to install an app from a maintained repo? .deb? .rpm? .pkg.xz??? No! Now you have a Tiller application chart server to which your Helm clients call to request new installations of applications described as Helm charts, which basically get fetched by Docker daemons, on the machine to which the workloads were scheduled, from a container image registry. etc. etc. you get the idea.

The fundamental difference, in my view, between every Unix/Linux/Posix traditional tool that I mentioned and the new "cloud-native" tools is that the Unix tools never considered the existence of other machines. The OS was about _this single computer_, never about other computers. The grand scales of operation we face today transcend the capabilities of any single computer, requiring this new "cluster-awareness", and requiring a remake of all the fundamental Unix tools to deal with the fact that "you are not alone", but one lowly replaceable machine among a fleet of tens of thousands.

It seems that distributed systems, by their nature, have an extremely high level of inherent complexity. This, along with huge funding, basically unlimited resources, a ton of scope-creep to fulfill crazy marketing promises, and hundreds of different teams of developers building their parts in relative isolation from each other, ends up causing most large-scale distributed-systems projects to become unwieldy, ugly, giant abominations. Few actually need such tools, few would benefit from such scale, but _all_ want in on the hype, and that is my main gripe.

That, and my inherent personality trait of hating, hating!, complexity. I want my machines to serve me and mine, I want to be in control, not have them act out on their own by the rules of some google developer who thought that whatever he thought was a good idea.

gotta go look at an alpine installation now for a few minutes to cleanse my soul :cirnoForReals:
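[The cron comparison above is easy to make concrete: a one-line crontab entry versus the Kubernetes CronJob object that replaces it. The schedule, names, and image below are invented for illustration.]

```yaml
# Unix: one line in a crontab, runs on *this* machine only:
#   0 3 * * *  /usr/local/bin/backup.sh

# Kubernetes: a YAML API object; the scheduler picks a node for each run
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup              # hypothetical name
spec:
  schedule: "0 3 * * *"             # same five-field cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: example.org/backup:latest   # hypothetical image
            command: ["/usr/local/bin/backup.sh"]
```

[Same intent, but the second version is cluster-aware: the job can land on any eligible node, at the cost of putting a container image, a registry, and an API server in the loop.]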
(DIR) Post #A8SZcfGUXh5miC79xA by fluffy@social.handholding.io
2021-06-20T00:10:39.098885Z
1 likes, 0 repeats
@rozenglass @yes
> etc. etc. you get the idea.
actually i'm really enjoying these, and read all of them in detail. if you went on for a while i wouldn't mind the least bit. it's cool to see the two compared like that
(DIR) Post #A8SaACt0YoCuXh7m7s by pomstan@xn--p1abe3d.xn--80asehdb
2021-06-20T00:15:28.639157Z
0 likes, 0 repeats
@rozenglass @fluffy @yes
> k8s (and its ecosystem in general) is super complex, and has way way too many concepts
it’s ok to be a retard. we won’t judge
(DIR) Post #A8SaADNUjSas4FO7e4 by yes@social.handholding.io
2021-06-20T00:16:43.153215Z
1 likes, 0 repeats
@pomstan @rozenglass @fluffy you need to learn kubernetes RIGHT NOW https://youtu.be/7bA0gTroJjw
(DIR) Post #A8SfWsXCtQNPr1Ybke by pomstan@xn--p1abe3d.xn--80asehdb
2021-06-20T00:19:52.495339Z
0 likes, 0 repeats
@rozenglass @fluffy @yes
> That, and my inherit personality trait of hating, hating!, complexity. I want my machines to serve me and mine, I want to be in control, not have them act out on their own by the rules of some google developer who thought that whatever he thought was a good idea.
those rules are written in blood and sweat of hundreds of SREs trying to fix shitcode at scale, a (((unix))) retard wannabe like you has no right to criticize them
(DIR) Post #A8SfWtDkLGP7yxSa3c by rozenglass@anime.website
2021-06-20T01:16:48.523670Z
1 likes, 0 repeats
@pomstan @fluffy @yes
> a (((unix))) retard wannabe like you has no right to criticize them
oh I'm not exactly criticizing them, I'm criticizing the misplaced obsession, by smaller companies, with using them where their use is not warranted. Especially when the teams there are not exactly experts in the domain. Throwing docker and k8s and containers in just because "it's better", without actually realizing that they are trading simplicity for complexity without good benefits for them. I've seen enough teams completely paralyzed by their tools because they just brought in too many, and no one knows how to fix them when they break. It is literally my job, working with a number of client companies, to get them out of such ruts, as an SRE / DevOps consultant. Countless cases of unwarranted complexity, brought in by inexperienced, unrestrained, hype-following devs, paint my outlook on the matter.

What is the solution for a bug in the system that causes it to go into an infinite loop, taking the whole service down? "Hello, can you setup containers for us on Amazon ECS? Containers can scale fast!" -- said the engineer who was my point of contact at my next-to-last gig.

I already concede that large-scale systems have huge inherent complexity, and that Kubernetes, among other orchestration systems, is a good attempt at fixing them. I would use Kubernetes again if I had to architect a large system with many teams working on it. I have nothing against its use in the right place.

> a (((unix))) retard wannabe
Sorry, I guess you are projecting me onto some 4chan persona you have imprinted in your mind. I am not a Unix fan in the least; I'm even worse: a smug Lisp weenie :P Please do not lump me with a generic strawman persona and pass judgements on me like that. I don't _hate_ Unix (I don't hate most things tbh), but I've read the Unix Haters Handbook :D

> those rules are written in blood and sweat of hundreds of SREs trying to fix shitcode at scale
Thanks for appreciating my work, and the work of my fellow SREs. We are rarely appreciated, even when we get a call at 2AM to remotely bring the system up again when the servers are _literally on fire_, and users of the system literally have to use it to plan their preparations for medical surgical operations in a few hours. I wrote my fair share of complex orchestration systems to deal with complex problems. It is by necessity. I just wish my fellow SREs would collectively focus on making a good distributed, cluster-aware replacement for Unix, instead of wasting millions of collective man-hours trying to build new layers of abstraction on top of systems built on fundamentally incompatible paradigms, resulting in hybrid abominations that are, in many cases, more fragile than they could be.

We are plumbers, and pardon my words, cleaning shit all the time. I respect my fellow plumbers, and their enormous efforts at cleaning shit. I just wish things weren't so shit in the first place.

> of some google developer who thought that whatever he thought was a good idea
I have been bitten quite a few times by stupid decisions of Google (and Apple) developers. Sorry, not sorry. The majority of what they do is of acceptable, if not very high, quality, but like everyone else, they have their share of stupid nonsense. Stupid nonsense that has to be maintained forever for backward compatibility. Everyone makes mistakes; the hundreds of bugs currently open against K8s are proof enough. But when I make mistakes, I have only myself to blame, and total control to fix them. When they make mistakes, I am left powerless, completely at their mercy, and I _will_ blame them.

> That, and my inherit personality trait of hating, hating!, complexity.
Note that I mentioned my personality here to point out that it is a _personal preference_. I deal with complex systems all day, every day; I know complex systems are complex; I respect the work and collective experience of my fellow developers. BUT, their code is never touching _my_ personal non-production systems. Because when I go home I would like to admire something simple that works, and forget about all the pain I had to deal with because some developer doesn't understand that their query basically pulls 200 GiB of data over the network to process it in memory, and that the machine doesn't have that much memory. I come home to simplicity to relax, until I get paged, and some client has their whole system down again.

sorry if I get snarky sometimes, sometimes the frustration and stress gets to me.

hope you have a nice day :cirnoSmile:
(DIR) Post #A8Sg8njvAWTBzGaSTA by rozenglass@anime.website
2021-06-20T01:23:40.829768Z
1 likes, 0 repeats
@pomstan @fluffy @yes Oh, it's not about me. It is about the developers who have to live with the possibility that I might not be available to help when things go wrong after my 3-month contract is up, or if I get hit by a bus, and they couldn't find a replacement for months. We are not the US, and finding SREs who know their stuff and don't cost more than the company's yearly budget is hard.

The more new concepts and tools I leave behind for them to learn, the less likely they will be able to tweak and fix stuff safely on their own. Not because they are retarded, but because this is not their specialization, nor do they have the time to learn and understand it when something goes wrong.
(DIR) Post #A8TRIgvc2motbTH1Iu by IAmAWarCrime@poa.st
2021-06-20T10:12:07.206121Z
1 likes, 0 repeats
@pomstan @rozenglass @fluffy @yes
Imagine reading a whole-ass post that doesn't even trash the software, just bad execution with it, and the only thing you can come up with is "what are you, a Linux nerd? Lel"