[HN Gopher] We reduced our server costs by moving away from AWS
       ___________________________________________________________________
        
       We reduced our server costs by moving away from AWS
        
       Author : caberus
       Score  : 388 points
       Date   : 2022-09-28 13:47 UTC (9 hours ago)
        
 (HTM) web link (levelup.gitconnected.com)
 (TXT) w3m dump (levelup.gitconnected.com)
        
       | jmyeet wrote:
       | This is unsurprising.
       | 
       | The point of AWS is to be flexible. You're paying for that. It's
       | easy to start. It's easy to stop. It's easy to change capacity.
       | 
       | Running your own servers is none of these things. But it is
       | cheaper at sufficient scale. You can't ignore the labor cost
       | (particularly engineering) however.
       | 
       | Where AWS shines is with highly volatile workloads. With your own
       | servers you have to provision for peak capacity. That's less the
       | case with AWS.
       | 
       | No shade on the author of course. It's great to read things like
       | this.
        
       | epberry wrote:
       | I believe this is the use case Cloudflare is really targeting
       | with R2. They recently connected Cache Reserve to R2 to make this
       | even easier. We wrote up a breakdown for S3 vs R2 and found that
       | R2 would be significantly cheaper when the majority of traffic is
        | cached data:
       | https://www.vantage.sh/blog/cloudflare-r2-aws-s3-comparison
        
       | thoop wrote:
       | Hi! I'm Todd, the solopreneur founder of Prerender.io and I
       | created that $1,000,000/year AWS bill. I sold Prerender.io to
       | Saas.group in 2020 and the new team has done an incredible job
       | growing and changing Prerender since I left.
       | 
        | A $1M-per-year bill is a lot, but the Prerender back end is
       | extremely write-heavy. It's constantly loading URLs in Chrome in
       | order to update the cached HTML so that the HTML is ready for
       | sub-second serving to Google and other crawlers.
       | 
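        | The core of that workload is conceptually simple. A rough sketch
        | of the render step (using Playwright here purely for
        | illustration; it is not the actual Prerender code) looks
        | something like this:
        | 
        |     from playwright.sync_api import sync_playwright
        | 
        |     def render(url: str) -> str:
        |         # Load the page in headless Chromium and return the
        |         # fully rendered HTML, ready to be cached and served
        |         # to crawlers.
        |         with sync_playwright() as p:
        |             browser = p.chromium.launch(headless=True)
        |             page = browser.new_page()
        |             page.goto(url, wait_until="networkidle")
        |             html = page.content()
        |             browser.close()
        |             return html
        | 
        | The expensive part is doing that constantly, at scale, with each
        | render holding a full browser page open.
        | 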
       | Being a solo founder with a profitable product that was growing
       | organically every month, I really didn't have the time to
       | personally embark on a big server migration with a bunch of
       | unknown risks (since I had never run any bare metal servers
       | before). So the architecture was set early on and AWS allowed me
       | the flexibility to continue to scale while I focused on the rest
       | of business.
       | 
        | Just for a little more context on what was part of that $1M bill:
        | I was running 1,000+ EC2 spot instances running Chrome browsers
        | (PhantomJS in the early days). I forget which instance type, but
        | I generally tried to scale horizontally with more, smaller
        | instances for a few different reasons. Those servers, the rest of
        | the infrastructure around rendering and saving all the HTML, and
        | some data costs ended up being a little more than 50% of the
        | bill. Running websites through Chrome at scale is not cheap!
       | 
        | I had something like 20 Postgres databases on RDS used for
        | different shards containing URL metadata, like the last recache
        | date. It was so write-heavy that I had to really shard the
        | databases. For a while I had one single shard, and I eventually
        | ran into the Postgres transaction ID wraparound failure. That was
        | not fun, so from then on I definitely over-provisioned RDS shards
        | to prevent it from happening again. I think RDS costs were like
        | 10%.
       | 
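        | For anyone who wants to avoid that particular surprise: the age
        | of each database's oldest transaction ID is easy to watch. A
        | rough monitoring sketch (assuming psycopg2; the connection
        | details are placeholders):
        | 
        |     import psycopg2
        | 
        |     # Wraparound becomes an emergency as age() approaches
        |     # ~2 billion; autovacuum normally freezes rows long before
        |     # that (autovacuum_freeze_max_age defaults to 200 million).
        |     ALERT_THRESHOLD = 500_000_000
        | 
        |     conn = psycopg2.connect("dbname=postgres user=postgres")
        |     with conn, conn.cursor() as cur:
        |         cur.execute(
        |             "SELECT datname, age(datfrozenxid) "
        |             "FROM pg_database ORDER BY 2 DESC"
        |         )
        |         for datname, xid_age in cur.fetchall():
        |             if xid_age > ALERT_THRESHOLD:
        |                 print(f"WARNING: {datname} xid age {xid_age}")
        | 
        | Over-provisioning the shards was still the real fix for a write
        | rate like that, but at least an alert like this fires early.
        | 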
        | All of the HTML was stored in S3, and the number of GET requests
        | wasn't too crazy. But between being so write-heavy on PUT
        | requests for recaching HTML (each with a decent-sized chunk of
        | data), the servers to serve customer requests, and data-out from
        | our public endpoint, that was probably 30%.
       | 
       | There were a few other things like SQS for populating recache
       | queues, elasticache, etc.
       | 
       | I never bought reserved instances and I figured the new team
       | would go down that route but they blew me away with what they
       | were able to do with bare metal servers. So kudos to the current
       | Prerender team for doing such great work! Maybe that helps
       | provide a little more context for the great comments I'm seeing
       | here.
        
       | floatinglotus wrote:
       | This is the Trillion Dollar Paradox described by Martin Casado.
       | You're crazy if you don't start your business in the cloud,
       | you're crazy if you stay there.
       | 
       | My new startup is focused on helping application owners
       | repatriate their workloads into their own infrastructure.
       | 
       | Our goal is to solve the network complexity challenges with a
       | fully open network stack (open source software with all of the
       | hardware options you would expect, and some you wouldn't). The
       | solution is designed to be turnkey and require very little
       | network experience. It will use standard DevOps tools that you're
       | already using.
       | 
       | We're announcing it in two weeks and will be posting info here on
       | HN!
        
         | JohnHaugeland wrote:
         | > This is the Trillion Dollar Paradox described by Martin
         | Casado. You're crazy if you don't start your business in the
         | cloud, you're crazy if you stay there.
         | 
         | That's not a paradox. You're not crazy in either case.
         | 
         | Starting in the cloud reduces costs for some strategies by
         | removing necessary engineering. Starting in the cloud increases
         | costs for other strategies by charging too much for commodity
         | offerings.
         | 
         | It's relatively straightforward to make the choice as soon as
         | you put a specific business in the crosshairs.
         | 
         | Edit: why is this downvoted?
        
           | gadflyinyoureye wrote:
           | You're going against the party line.
        
         | user3939382 wrote:
          | I've had tons of greenfield web apps with thousands of users
          | (not millions), and setting up a server was a problem I didn't
          | need help solving. I'm versed in AWS now, but I don't see any
          | benefit. The only big headache I had with server admin was
          | running email services/deliverability.
        
         | no_identd wrote:
         | What does your product offer over what Sovereign Cloud Stack
         | (See https://scs.community/) already offers?
         | 
         | (And before anyone falsely claims no commercial offers for this
         | exist yet:
         | 
         | https://www.plusserver.com/en/products/pluscloud-open [no
         | affiliation])
        
         | Melatonic wrote:
         | Very interested - you need people?
        
         | jamal-kumar wrote:
          | Damn, that's really interesting, thanks for sharing. Do you
          | have like an authoritative link on this paradox?
          | 
          | I've been getting by on like $80/month Hetzner servers (ancient
          | 10-year-old pizza box machines) for my businesses for a long
          | while. Never did the cloud thing; I can run like three or four
          | websites on one of those bad boys. I guess stuff like AWS makes
          | sense if your computational workload happens in bursts or
          | whatever? But $80/month in overhead costs is like nothing; I
          | spend more than that on a good night out.
        
       | otabdeveloper4 wrote:
       | No kidding? You reduced your server costs by moving away from the
       | most expensive hoster on the planet? Good for you, I guess :)
        
       | taldo wrote:
       | When the cost of delivering your product/service is mostly
       | compute or traffic, sure, migrating off of AWS is a must once you
        | reach a certain scale. But for the other 99% [0], where
        | infrastructure is but a small cost, think really hard about
        | whether you're willing to trade the convenience of managed cloud
        | services for the engineering effort of running your own.
        | 
        | [0]: or 90%, or 80%, or who cares, but the majority of software
        | services seem to NOT be compute- or traffic-heavy.
        
       | joshstrange wrote:
        | They don't mention at all what services they were using (other
        | than a slight mention of S3), which makes it very hard to respond
        | to this. If you are running everything on EC2 then you are going
        | to have a bad time (especially if you aren't using reserved
        | instances).
       | 
        | AWS (IMHO) shines with the various services they provide (S3,
        | Lambda, CloudFront, API Gateway, SQS, SES, to name a few). AWS is
        | a game of trying to reduce your bill, and often that means using
        | AWS-specific services. If you want to stay completely "cloud
        | agnostic" you are going to pay more than if you buy into a
        | "cloud"; in that scenario you absolutely should be looking at
        | dedicated servers. AWS is great because you can bring existing
        | software and just run it in EC2 (or their container stuff if your
        | software is containerized), but the AWS magic comes from using
        | their managed services (also spinning EC2 instances up and down
        | as needed; if you are running them 24/7 then consider
        | alternatives, or at least pay for reserved instances).
        
         | [deleted]
        
         | twawaaay wrote:
          | Yeah, you need to do the calculation of what you are doing
          | right now vs AWS.
          | 
          | If you already have a mature team that is doing well at
          | optimising the environment, you have stable demand, and you
          | have no need to develop things rapidly, then it is very likely
          | that going cloud is a poor choice.
         | 
         | As to "cloud agnostic", don't believe this bullshit. In my
         | opinion most projects fare much better by just letting go of
         | this "cloud agnostic" and just getting tied and locked in to
         | the vendor. It is not like there is a good chance Amazon (or
         | Microsoft or Google for that matter) will suddenly pull a
         | significant price hike or another move out of step with other
         | vendors.
         | 
          | Spending effort on trying to make your app "cloud agnostic"
          | usually ends up with far higher development costs and a higher
          | chance of failure, for no benefit. Embracing one vendor is
          | usually the best way to achieve a smaller, leaner app that uses
          | the platform well.
         | 
          | It is the same story as with SQL. People try to use frameworks
          | to keep their app DBMS-agnostic, but then nobody ever migrates
          | their apps to another DBMS. And for various good reasons: you
          | hired your staff for their expertise with X, they are used to
          | it, so going for Y will usually cost far more than it benefits
          | you.
        
           | KronisLV wrote:
           | > It is not like there is a good chance Amazon (or Microsoft
           | or Google for that matter) will suddenly pull a significant
           | price hike or another move out of step with other vendors.
           | 
            | I feel like a variety of other circumstances can come to
            | pass, which would negatively affect business continuity,
            | here's what a lazy search turned up:
            | 
            |   "My Google Cloud was suspended too"
            |   https://news.ycombinator.com/item?id=32571055
            | 
            |   "Google suspended our domain out of the blue"
            |   https://news.ycombinator.com/item?id=32798368
            | 
            |   "Tell HN: Google Cloud suspended our production projects
            |   at 1am on Saturday"
            |   https://news.ycombinator.com/item?id=32547912
            | 
            |   "AWS account was permanently closed because it was
            |   suspended for 90 days"
            |   https://news.ycombinator.com/item?id=31571538
            | 
            | (probably happens with Azure and other smaller platforms as
            | well, e.g. Hetzner, DigitalOcean, Vultr, Scaleway and so on)
           | 
           | That said, most people won't care to put in the work for a
           | multi-cloud/cloud-agnostic setup, since most projects just
           | aren't as important to warrant that much effort. And the ones
           | that are can probably also just talk with the cloud providers
           | through some representative anyways due to spending $$$.
           | 
           | I'd argue that it's good to build on common standards, like
           | OCI containers and the wire protocol of some common DBMS like
           | PostgreSQL or MySQL/MariaDB: so that you can replicate
           | certain parts of what a managed service does in a container
           | locally, for development and testing. In most cases it won't
           | matter that the managed cloud offering has some clever
           | engineering underneath it and scales better, as long as you
           | can check whether your SQL executes as it should and none of
           | your seeded/anonymized test data breaks.
           | 
           | I actually had this project with Oracle DB that ran horribly
           | when the instance was remote and you really couldn't do
           | migrations well without breaking things for anyone using it,
           | because the apps using it were written in a way where
           | hundreds if not thousands of SQL queries were done just to
           | load some data to display a page. Which is passable (if you
           | don't care) when the DB is running in the same data centre as
           | the apps, but absolutely horrible when these many smaller SQL
           | queries have the full network round trip between them,
           | especially when the app initializes data sequentially (N+1
            | problem). Ergo, the only way to work with something like that
            | is to set up a local database (e.g. Oracle XE) and work with
            | it, after importing a baseline migration/doing automated
            | migrations.
           | 
            | The same largely applies to any other DB with a subpar
            | application architecture, as well as many other services
            | (e.g. MinIO/Zenko instead of S3, so you don't need to
            | actually upload 100 MB files to test whether the attachment
            | logic in your app works, if you can run them locally).
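            | 
            | A rough sketch of that last point, with boto3 (the bucket
            | name and MinIO's default dev credentials here are just
            | placeholders):
            | 
            |     import boto3
            | 
            |     # Same code path as production S3; only the endpoint
            |     # and credentials differ for the local MinIO instance.
            |     s3 = boto3.client(
            |         "s3",
            |         endpoint_url="http://localhost:9000",
            |         aws_access_key_id="minioadmin",
            |         aws_secret_access_key="minioadmin",
            |     )
            |     s3.create_bucket(Bucket="attachments-test")
            |     s3.put_object(
            |         Bucket="attachments-test",
            |         Key="report.pdf",
            |         Body=b"stand-in for a real upload",
            |     )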
           | 
           | As for software that should be compatible with multiple
           | DBMSes: sometimes it makes sense (e.g. something that could
           | be used by your customers in a self-hosted setup across
           | numerous different setups, like Zabbix or Nextcloud), but
           | most of the time it would negatively impact how easy it is to
           | develop code for your app (e.g. having to rely on ORM and
           | their abstractions like JPQL) and testing everything would
           | usually take more effort.
        
           | deltarholamda wrote:
           | >It is the same story as with SQL. Trying to use frameworks
           | to keep your app DBMS-agnostic but then nobody ever migrates
           | their apps to another DBMS. And for various good reasons: you
           | hired your staff for their expertise with X, they are used to
           | it, so now going for Y will usually be far higher cost than
           | the benefits.
           | 
           | Yeah, I never got this. If actual, for really real SQL is
           | necessary for your application, then you need an actual, for
           | really real DBA. If it's just a really nifty flat-file
           | interface that "just werks," go with whatever DB engine your
           | framework is most tested/used on.
           | 
           | I get that sometimes you're interfacing two different things,
           | one of which is on one DB engine, and you'd like to use the
           | same software. That's fine. But DB agnosticism has a cost
           | like anything else.
           | 
           | If you can be truly cloud-agnostic, then there is a high
           | probability you can be cloud-free.
        
           | speedgoose wrote:
            | Not being cloud agnostic forces you to use one single cloud
            | provider, which is not necessarily the best at everything.
        
             | twawaaay wrote:
             | It is a tradeoff. Whatever you choose will have its costs.
             | 
              | You need to know the tradeoff you are making, but what I am
              | saying is that, in most cases, spending so much effort up
              | front on being "cloud agnostic" costs more than any benefit
              | of maybe being able to switch cloud environments in a
              | hurry.
        
               | speedgoose wrote:
                | I agree. I have switched cloud environments at work in
                | the past, and it required a lot of effort. However, we
                | switched towards standard Kubernetes to have more
                | flexibility in the future.
               | 
               | Also sometimes you don't want to minimise the cost but
               | use the best services or reduce the risks. Let's say I go
               | full on Azure, but after a year or two GCP offers a much
               | better product for my use case. I will have to ignore it
               | because it's too expensive to switch. Or if I go full on
               | GCP/Firebase but suddenly the pricing model changes and
               | it becomes overpriced. I will have to eat the cost for
               | some time.
               | 
                | However, if I'm careful to avoid vendor lock-in from the
               | start, it will probably cost a bit more on average, but
               | the maximum cost is much lower and I don't risk being
               | stuck with a bad service.
               | 
                | In practice I will, for example, avoid writing code
                | specific to the proprietary Azure Blob Storage API and
                | use an S3-compatible object storage instead. I will also
                | rather have an abstraction layer than use AWS SQS or
                | GCP Pub/Sub directly.
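                | 
                | The abstraction layer doesn't have to be heavy. A rough
                | sketch of the idea (all names made up):
                | 
                |     from typing import Optional, Protocol
                | 
                |     class Queue(Protocol):
                |         def publish(self, msg: bytes) -> None: ...
                |         def poll(self) -> Optional[bytes]: ...
                | 
                |     # Application code depends only on Queue; an
                |     # SqsQueue or PubSubQueue implements it, so
                |     # swapping providers touches one module.
                |     def enqueue_order(queue: Queue, payload: bytes):
                |         queue.publish(payload)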
        
               | [deleted]
        
           | pooper wrote:
           | > It is the same story as with SQL. Trying to use frameworks
           | to keep your app DBMS-agnostic but then nobody ever migrates
           | their apps to another DBMS.
           | 
            | A small nit: I think this is partly because we've been burned
            | before. Same idea as backups: if you don't restore from
            | backups, you don't know if you have good backups. Similarly,
            | if you don't exercise certain code paths, you can't tell if
            | those code paths are sufficiently bug-free.
           | 
            | I am thinking of Drupal. Pretty much everyone who uses Drupal
            | uses MySQL/MariaDB as far as I know. I think Drupal supports
            | PostgreSQL, but nobody I know uses it, because nobody they
            | know uses Drupal with PostgreSQL. I don't know anyone who
            | uses Drupal with SQLite in production either.
        
           | minhazm wrote:
           | The SQL example is a great one. I worked on a project where
           | we used MySQL but then used SQLite for unit tests and
           | integration tests. We technically wanted to be able to swap
           | out the RDBMS to something like Postgres but in practice it
           | was never going to happen. Over time it started getting
           | complicated as some stuff we used in MySQL wasn't supported
           | in SQLite. Eventually I just ditched SQLite and changed our
           | testing to start a real MySQL server and destroy it after
           | testing is complete.
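            | 
            | That last step is pretty painless nowadays. A rough sketch of
            | a pytest fixture doing it (assuming testcontainers and
            | SQLAlchemy, not necessarily what that project actually used):
            | 
            |     import pytest
            |     import sqlalchemy
            |     from testcontainers.mysql import MySqlContainer
            | 
            |     @pytest.fixture(scope="session")
            |     def mysql_engine():
            |         # Start a throwaway MySQL in Docker for the test
            |         # run and tear it down when the session ends.
            |         with MySqlContainer("mysql:8.0") as mysql:
            |             yield sqlalchemy.create_engine(
            |                 mysql.get_connection_url()
            |             )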
        
           | spoiler wrote:
           | > As to "cloud agnostic", don't believe this bullshit.
           | 
            | A notable exception to this is if you have clients whose IT
            | departments require you to work on specific cloud providers.
            | There are tools that help abstract deploying infrastructure
            | to different cloud providers (e.g. Terraform, Pulumi), but
            | they still require some familiarity with the providers. With
            | all that said, overall I agree with your sentiment.
        
             | twawaaay wrote:
             | When you build software for a client that is a completely
             | different game.
             | 
             | I divide this into two completely separate areas:
             | 
             | * advisory -- you advise your client on what is prudent in
             | their circumstances,
             | 
             | * software development -- you produce software to client
             | specifications, regardless of what they asked.
             | 
             | You can advise them all you want but then you do what they
             | asked and that's it -- they are always right.
        
           | joshstrange wrote:
           | > As to "cloud agnostic", don't believe this bullshit.
           | 
           | > It is the same story as with SQL. Trying to use frameworks
           | to keep your app DBMS-agnostic but then nobody ever migrates
           | their apps to another DBMS.
           | 
            | I agree. I can't tell you how many hours I've wasted trying
            | to keep something (theoretically) cloud- or DBMS-agnostic,
            | how many problems it's caused, and at the end of the day we
            | could never "easily" move to a different cloud or DBMS
            | without large rewrites.
           | 
            | I built on top of the Serverless Framework and I am
            | constantly kicking myself for doing so (eventually I'll move
            | off it for my main project). It's the worst of all worlds:
            | the Serverless Framework docs even have separate sections for
            | "GCF" vs "Lambda", and if I have to write my
            | functions/declarations differently anyway, then why am I
            | using your layer on top? I know you get a few things for
            | "free" with the Serverless Framework, but once your project
            | grows you run into all sorts of issues that are a PITA to
            | work around. The truth is I'm not leaving AWS Lambda for GCF
            | or whatever Azure has. I quite enjoy my "walled garden", and
            | using an abstraction layer only means I get the lowest common
            | denominator, or a headache when trying to do something that
            | is normally easy if you've fully bought in.
        
             | guhidalg wrote:
             | But why would you try to build something cloud-agnostic to
             | run on the most cloud-aware tier: serverless? Computers
             | don't come forth from the ether, your code is always
             | running _somewhere_ and serverless means your provider
             | makes a ton of choices for you.
             | 
             | The lowest common denominator across clouds is a VM. If you
             | can run your applications and your databases on VMs, you
             | can run them on any cloud AND on-premises. If you can't run
             | CosmosDB or AWS Whatever on a VM, your choice of DBMS/ORM
             | doesn't affect that you're tied to that cloud provider.
        
               | galaxyLogic wrote:
                | I wonder: could micro-services be the solution?
                | 
                | Run some of your micro-services on AWS and others on
                | Azure, etc. A given micro-service might be highly
                | dependent on AWS, but the part of your application that
                | uses those micro-services would not know anything about
                | the internals of AWS.
        
               | guhidalg wrote:
               | Perhaps if you're trying to get the best products from
               | each cloud, but I doubt this would be worth it. For each
               | cloud you are going to need to establish: accounts,
               | payment, access control, continuous deployment,
               | geographical locality of their DCs, distributed logging,
                | billing (and watch out for egress costs!), etc. I cannot
                | imagine the benefits would outweigh the added complexity
                | compared to doing everything with one provider.
               | 
               | But hey if your goal is to bill a client as much as
               | possible by claiming that you're using the best products
               | from each cloud, this sounds like a GREAT business idea.
        
           | ozim wrote:
            | Cloud agnostic has its place if you have an enterprise
            | contract with a cloud provider.
            | 
            | If you have 100s of projects, cloud-agnostic infra is good:
            | if, for example, Amazon buys your competitor, you may now get
            | a "new contract coming up".
            | 
            | That said, I agree "most projects fare much better without";
            | a lot of people don't understand they are not in a position
            | where they would benefit from that.
        
             | twawaaay wrote:
             | The issue here is that the contract terms might change
             | iteratively by some percentage, while development costs are
             | exponential -- adding requirements to the project increases
             | cost exponentially.
        
         | outcoldman wrote:
          | I would assume the 200k figure also does not include
          | electricity, maintenance, and the human resources for managing
          | their infrastructure.
        
         | onlyrealcuzzo wrote:
         | This is likely only true until they have enough lock-in on
         | those cloud products to really turn the screws and extract
         | maximum profits from people that are TOTALLY screwed by being
         | completely locked into proprietary AWS services...
         | 
         | This is basically the natural progression of SaaS...
        
           | dehrmann wrote:
           | Existing customers will flee as quickly as they can, and new
           | customers will look elsewhere. In reality, they'll never
            | charge more than their public rate card, and that rate card
            | has to stay competitive with the other providers.
        
           | pwinnski wrote:
           | You think AWS doesn't have enough lock-in now? And yet
           | maximum profits come from keeping customers, not driving them
           | away.
           | 
           | There's no such thing as a managed service so proprietary
           | that it can't be essentially cloned by another provider. I
           | remember people making the same claims about S3 long ago, and
           | yet there are multiple providers of S3-compatible file
           | storage services now.
           | 
           | Microsoft's Azure provides migration tools for people wanting
           | to leave Amazon's AWS, and they are not alone.
           | 
           | Amazon tends to be (in my opinion) a few years ahead of
           | everybody else, and different competitors focus on different
           | areas of competition, but migrating away from AWS isn't
           | really as hard as the "proprietary" label might make it seem.
        
             | throwaway894345 wrote:
             | Our organization moved from AWS to GCP. We were very
             | integrated in AWS, but the migration wasn't so bad, and
             | even with the benefit of hindsight I would never have
             | thought that we should have strived to be cloud agnostic.
             | Or rather, the best way to be "cloud agnostic" is to
             | architect your applications sensibly such that changing
             | from S3 to GCS is a relatively small, localized change in
             | each application. Also, having a good test suite really
             | helps to protect against regressions, and a culture of
             | documentation will help to make sure you know what things
             | to port over and what things are vestiges of some old
             | requirements and so on. Ultimately, moving cloud providers
             | wasn't that bad in my experience.
        
           | blackoil wrote:
            | This is an often-mentioned scenario, but it is highly
            | unlikely because of how impractical it is.
            | 
            | First, AWS is already highly profitable; they aren't dumping
            | prices to the point where such a step becomes a necessity.
            | 
            | Also, any such move would dry up future growth from adding
            | new customers, which is suicidal.
        
             | hotpotamus wrote:
             | Is AWS profitable enough though?
        
         | hintymad wrote:
          | And a company that does not use the cloud likely resorts to
          | Chef/Puppet/TFE and whatnot to manage their machines. Such
          | scripting tools may work for a small team, as the engineers
          | deal with their specific needs day in and day out, but they are
          | expensive to scale out to larger teams. I'd assume only a few
          | engineers would enjoy writing hundreds if not thousands of
          | lines of YAML or whatever so-called DSLs are offered by the
          | aforementioned tools. Plus, it takes effort to implement
          | autoscaling, metadata management, an image-building process
          | like AMIs, something similar to security groups, availability
          | zones, and so on.
        
           | boltzmann-brain wrote:
            | There are a lot of other businesses that blindly do cloud for
            | no good reason at all. I calculated the forecast of AWS vs.
            | own infrastructure (AWS is what was being used) for an ML
            | company doing major compute backfills. I did that at the
            | height of the GPU price craze and the whole move would still
            | have amortized itself after just 6 months. Why people stick
            | to AWS for that sort of stuff is beyond me. If you ever need
            | to spin up VMs on AWS rather than on your own metal, you can
            | always do it then. It boggles the mind that people want to
            | throw away millions to get worse performance and higher error
            | rates.
        
         | [deleted]
        
         | taylodl wrote:
         | Yes. If you use and manage AWS resources as you would your own
         | on-prem resources then you're not going to have a good time. As
         | soon as you think you need EC2 instances you need to re-think
         | your architecture. You're probably not using AWS most
         | effectively.
        
           | MajimasEyepatch wrote:
           | I think that statement about EC2 is a little too strong. You
           | definitely shouldn't be managing individual EC2 instances in
           | most cases, and you probably shouldn't be deploying directly
           | to EC2. But if you're running EKS (Kubernetes) or ECS
           | (non-k8s containers), then you're probably going to get more
           | bang for your buck with EC2 nodes than Fargate nodes,
           | especially if you have a large cluster.
        
             | taylodl wrote:
             | That's a good distinction. Maybe instead of saying avoid
             | EC2 what we should be saying is try to utilize AWS' managed
             | services whenever possible.
        
               | MajimasEyepatch wrote:
               | Yup, that's generally true. Otherwise, you really might
               | not be better off with AWS.
        
           | shudza wrote:
            | So what should you use for, let's say, a backend API service?
            | Don't tell me Beanstalk/Fargate/etc., because they're
            | actually more expensive.
        
             | joshstrange wrote:
             | Lambda is an option, it's been a very attractive one for
             | me.
        
               | jjav wrote:
                | AWS Lambda is super cheap at tiny scale. But if you get
                | into any real constant load, it is way more expensive
                | than a VM.
                | 
                | The other drawback of Lambda is the flip side of having
                | the server opaquely managed by AWS. It is so opaque you
                | can't debug anything. At one startup we had a weird
                | connectivity issue from Lambda to RDS, but it was
                | impossible to diagnose given the lack of access, so it
                | went on for months.
                | 
                | Had it been running in a VM, I could've diagnosed that
                | within an hour with tcpdump, BPF, et al.
        
             | 300bps wrote:
             | Lambda, SQS and SNS.
             | 
             | That's how just about everyone does backend API services
             | with AWS. You'll be shocked at how cheap, scalable and
             | bullet-proof it can be.
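              | 
              | The handler side of it is about as small as backend code
              | gets. An illustrative sketch of a Lambda behind an API
              | Gateway proxy integration (not anyone's production code):
              | 
              |     import json
              | 
              |     def handler(event, context):
              |         # With the proxy integration, the method, path,
              |         # and query string all arrive in the event dict.
              |         params = event.get("queryStringParameters") or {}
              |         name = params.get("name", "world")
              |         return {
              |             "statusCode": 200,
              |             "headers": {
              |                 "Content-Type": "application/json"
              |             },
              |             "body": json.dumps({"hello": name}),
              |         }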
        
               | PetahNZ wrote:
                | Unless your app is doing a lot of compute. We tried
                | moving our workers to Lambda; it was 10x the cost vs. an
                | EC2 auto-scaling group.
        
               | icedchai wrote:
               | The Lambda "developer experience" is mediocre at best.
               | Performance is often highly variable. And when you reach
               | a certain point, you'll get much better bang for the buck
               | with EC2. Everything can be cheap when you have very
               | little traffic.
        
               | cj wrote:
               | Until you reach scale, and then you'll typically end up
               | migrating off of those services onto EC2 where a $10k/yr
               | machine can process many orders of magnitude more
               | requests for the same price.
               | 
               | Lambda, etc, just doesn't scale well financially if
               | you're doing millions of requests per day.
        
               | taylodl wrote:
               | Then as the OP said, AWS isn't for you if you have a
               | constant 24/7 compute load. In my case the load is very
               | bursty. I sometimes do millions of requests per day, but
               | I typically don't. We lowered our TCO by 75% by moving
               | from on-prem to AWS and using Lambda. It really comes
               | down to your compute load profile.
        
           | ryanisnan wrote:
           | Using Karpenter for AWS has been a game changer. Set your
           | instance family designations, set up a compute savings plan,
           | and let it ride.
        
           | macinjosh wrote:
            | What I don't understand is why anyone would spend precious
            | and expensive engineering resources on coding to a
            | proprietary API owned by one of the largest companies on the
            | planet, to whom you are nothing.
           | 
           | Ok so you save money on the monthly bill but what happens
           | when Amazon decides they want to enter your market? What
           | happens if they decide your service is too controversial and
           | they kick you off?
           | 
            | If you were deploying your own services to EC2 instead of
            | using AWS's own services, you could at least set up shop
            | elsewhere with just a bit of work.
            | 
            | To me it is antithetical to building a sustainable product.
            | People are just hoping their startup gets bought and then it
            | is someone else's problem.
        
             | pooper wrote:
             | > To me it is antithetical to building a sustainable
             | product.
             | 
             | Continuing what you said, Google uses macOS and Windows for
             | client machines. That is ok because if Apple or Microsoft
             | were to decide to cut off Google altogether, Google would
             | have bigger problems.
             | 
              | However, I think from a business point of view, this is
              | similar to how so many businesses use Cloudinary.
              | Cloudinary is a software-as-a-service company that provides
              | cloud-based image and video management services. What if
              | Cloudinary kicks you out? There is a lot of custom code out
              | there interfacing with the Cloudinary API. Where do we draw
              | the line such that using Cloudinary is OK but AWS Lambda is
              | not?
        
               | macinjosh wrote:
               | I can't find the quote, but I think it was Joel Spolsky
               | on an old stackoverflow podcast with Jeff Atwood. They
               | were discussing why stackoverflow built their own servers
               | and did not use cloud or other hosting. He said something
               | like companies should fully own and control their product
               | and any direct dependencies thereof. If you think of it
               | as a concentric set of rings you at least want to have
               | full control over your ring and the ring below it. In the
               | terms of a web app it meant the code for your application
               | and the hardware it runs on. To me that seems reasonable.
        
               | RussianCow wrote:
               | That doesn't sound reasonable at all to me. Unless you
               | are building your own cloud service or doing something
               | out of the ordinary (for which cloud wouldn't be a good
               | fit anyway), the hardware you run on has absolutely
               | nothing to do with your business. Take the
               | shortest/cheapest path and move on.
               | 
               | To be fair to the Stack Overflow guys, AWS had fewer
               | offerings when SO was built, so the comparison wasn't the
               | same then as it is now.
        
             | sparker72678 wrote:
             | The alternatives are tremendously more expensive in both
             | time and money (and I'm thinking engineer salaries etc.
             | here, not just monthly service costs).
             | 
             | For most businesses the savings in money and time are worth
             | the risks you pointed out here.
             | 
             | If those risks are too high for you (and fair enough if
             | they are), you'll pay one way or another to avoid them.
        
               | pclmulqdq wrote:
               | Is your cloud ops department free? I have personally seen
               | that amazing sysadmins are a lot cheaper than passable
               | cloud ops people.
        
               | RussianCow wrote:
               | I don't think that's universally true. But even if it
               | were, you'd need far more people if you were to manage
               | the infrastructure yourself than to just use AWS.
        
             | joshstrange wrote:
              | I mean, most of my projects are not massive; they cater to
              | up to ~10K or so people. Running EC2 or a dedicated server
              | somewhere else would be a lot for me, a single developer,
              | to take on myself. However, using Lambda, API Gateway,
              | CloudFront, SQS, DynamoDB, S3, and more (sometimes also
              | with PlanetScale for my DB) is a really nice stack and the
              | costs are tiny (I'm talking <$30/mo in my busy months and
              | <$10 in my slower months; I write event-based software so
              | it's very bursty).
             | 
              | To run an equivalent stack myself would require significant
              | effort (upfront and ongoing) and I'm not sure I could get
              | the costs that low. Another great aspect of my stack is
              | that it all scales on its own without any intervention on
              | my part.
             | 
             | What you are comfortable using is a sliding scale with some
             | people thinking if you aren't hosting it in your own
             | datacenter then you are crazy all the way up to "let's just
             | use Firebase" (or similar). For me I've found that buying
             | in fully can be very enjoyable and the risk is rather low
             | (for me).
        
             | michaelcampbell wrote:
             | > What I don't understand is why anyone would spend,
             | precious and expensive engineering resources on coding to a
             | proprietary API owned by one of the largest companies on
             | the planet to whom you are nothing.
             | 
             | > Ok so you save money on the monthly bill...
             | 
             | Sounds like you do understand it.
        
             | pwinnski wrote:
             | > Ok so you save money on the monthly bill
             | 
             | Right, you understand, but maybe you don't _really_
             | understand?
             | 
             | If you work at a company large enough, and with enough
             | DevOps, NetOps, Ops people to go it alone, and can get
             | favorable lease terms on hardware, then by all means, host
             | it yourself! Paying all of those people and _also_ paying
             | for AWS seems odd, for sure.
             | 
             | But if you work at a company without a deep bench in
             | various Ops groups, then good news! You don't have to hire
             | them! Instead you can rely on Amazon's deep bench, and
             | focus your salary budget on developers rather than Ops
             | people.
             | 
             | There are scales at which, even not counting enormous
             | budget expenditures on staff rather than managed services,
             | AWS doesn't make sense. And if you're just using EC2,
             | you'll hit those numbers more quickly than if you make good
             | use of things like Lambdas and SNS/SQS and Dynamo and so
             | on. The cost savings _can_ be enormously staggering, and
             | the more you lean into AWS managed services, the more that
             | 's true.
             | 
             | We see the edge cases on HN, the rare instances of someone
             | being kicked off of somewhere, but for most people, the
             | uptime and long-term reliability of AWS managed services is
             | higher than trying to go it alone.
        
               | 60secs wrote:
               | If you need to hire 5-10 engineers to save $1MM, you
               | didn't save anything.
        
               | pwinnski wrote:
               | Right, which is why I can't imagine moving from AWS to
               | self-hosting. Having to hire a full set of Ops people to
               | manage what AWS is managing for me would hurt.
        
               | jjav wrote:
               | How are you running AWS without people?
               | 
               | Our AWS DevOps team isn't any smaller than they'd be if
               | they were running a handful of servers in a colocation
               | facility.
        
             | MajimasEyepatch wrote:
             | > What I don't understand is why anyone would spend,
             | precious and expensive engineering resources on coding to a
             | proprietary API owned by one of the largest companies on
             | the planet to whom you are nothing.
             | 
             | Who cares if it's proprietary? You're either locked in to a
             | proprietary stack or you have to go out of your way to
             | cobble together a FOSS stack on bare metal, in which case
             | you're essentially locked into that.
             | 
             | > Ok so you save money on the monthly bill but what happens
             | when Amazon decides they want to enter your market?
             | 
             | If AWS helps me get to market faster and cheaper than the
             | alternatives, I don't particularly care what the rest of
             | Amazon does.
             | 
             | > What happens if they decide your service is too
             | controversial and they kick you off?
             | 
             | Normal businesses don't have to worry about this. I've
             | honestly never heard of anyone getting kicked off of AWS
             | unless they were doing something extremely sketchy.
             | 
             | > If you were deploying your own services to EC2 instead of
             | using AWS's own services you could at least setup shop
             | elsewhere with just a bit of work.
             | 
              | True, if this is truly all you're doing, you can probably
              | find better options elsewhere at this point. But pretty
              | soon, you're going to be interested in managed solutions
              | for other things that are important but not core to your
              | business's value proposition: things like managed databases
              | and logs and auditing and object storage and containers and
              | data lakes.
             | 
             | If you don't want to use the cloud services that actually
             | make the cloud convenient to use, then yeah, I don't know
             | why you'd use the cloud.
             | 
             | But you could also run, say, Kubernetes on EKS and Fargate
             | and still have a relatively easy time porting your software
             | elsewhere in the future. And the other cloud providers have
             | their own versions of things like S3 and Lambda and RDS.
             | It's never easy to port production software to another
             | provider, but it's not like it's impossible. And it's also
             | an extremely unlikely scenario. I'd love to see some
             | statistics on how many companies actually move off of their
             | cloud providers; it's got to be one of the stickiest
             | businesses out there.
             | 
             | > To me it is antithetical to building a sustainable
             | product. People just hoping their startup gets bought and
             | then it is someone else's problem.
             | 
             | At this point, just about every Fortune 500 company is
             | using AWS, Azure, or Google Cloud in some capacity. For
             | most companies, building out your own infrastructure is not
             | part of your value proposition. So why not just pay
             | somebody else to do the undifferentiated heavy lifting of
             | figuring out how to run code in a generic way?
        
         | shepardrtc wrote:
          | Going from Alooma (pre-Google acquisition) to DMS saved us
          | thousands. I was shocked when I read that the only thing DMS
          | cost was running a small EC2 instance. It's a fairly good
          | service - it mostly just runs. AWS has so many offerings like
          | that. They really want to make sure you stick with their
          | ecosystem, and it pays to do so.
        
         | bachmeier wrote:
         | > which makes it very hard to respond to this
         | 
         | It's possible that they are not interested in a response. They
         | are saying what worked for them, whether or not others want to
         | agree.
        
         | dinobones wrote:
         | All of those services are poisonous.
         | 
         | They are awful to develop against, awful to test against, and
         | often times "just work" until they don't.
        
         | GoblinSlayer wrote:
         | I bet Amazon doesn't regret that they sold more than the
         | customer needed. Amazon wants to eat.
        
         | datalopers wrote:
         | EC2 is vastly more cost efficient than using all of those
         | managed services unless you only have tiny on-demand usage
         | scenarios.
        
           | andy_ppp wrote:
           | Are you including the wages of the extra people to support
           | building and managing such services?
        
             | datalopers wrote:
              | Extra people? No, because I hire generalists actually
              | capable of building software, not overpaid kids only
              | capable of gluing AWS services together.
        
               | etothepii wrote:
                | How much of your developer time gets lost to maintaining
                | servers?
                | 
                | I used to think this sort of thing was a valuable use of
                | my time, but I now have over $200k in annual revenue and
                | we pay less than $5 a month in raw compute (and that's
                | nearly all S3). My co-founder is always worried about the
                | cost of our AWS bill, but so far we just haven't seen it
                | materialize.
                | 
                | My mandate has always been that the price should be able
                | to scale to 0. So no EC2, which has made building some
                | longer-running tasks a bit more complicated, but that's
                | just because we didn't know how to do it before we
                | started.
        
               | datalopers wrote:
                | Maintaining servers? Nah, you deploy a container and
                | don't think about it. ECS itself is free to use; you only
                | pay for the underlying compute.
                | 
                | Anyway, you have a tiny on-demand business, as I caveated
                | above. You could run it just fine on a nano EC2 instance
                | if you needed to, which is $2-3/mo.
        
               | moonchrome wrote:
                | The more I work with cloud, the more I see this 'saves
                | time on infrastructure' being a half-truth at best - we
                | still have overwhelmed DevOps people who, unlike when
                | they managed on-prem services, have zero insight or
                | control over what goes on inside AWS. And the services
                | require a decent amount of hand-holding, proprietary
                | know-how, etc.
                | 
                | In the end you've replaced sysadmins with DevOps and
                | gotten upcharged by multiples.
        
               | zo1 wrote:
               | Agreed, devops (and whatever the hell cloud wranglers are
               | being sold as today) is the new priesthood. It's meta-
               | level job creation and job security.
        
               | acdha wrote:
               | It's certainly possible to overspend on cloud services
               | but in most cases when I see comparisons people tend to
               | forget to fully include their true costs for things like
               | staff time, infrastructure, etc. and especially things
               | like opportunity cost for the delays caused by
               | provisioning infrastructure, less capable interfaces
               | (e.g. if you're swapping Terraform for a Jira ticket), or
               | the technical decisions people make because they have
               | fewer services available (this could be a Lambda function
               | but we don't have that so now we need to manage a full
               | VM).
               | 
               | Some examples which come to mind:
               | 
               | * Comparing S3 to the on-premise tape system but ignoring
               | the fact that it involved an expensive tape robot, DR had
               | access times measured in days, etc.
               | 
               | * Comparing S3 to on-premise storage, ignoring the
               | difference in redundancy, forcing users to handle bitrot
               | at the application level, and the procurement process
               | meaning that when they ran out of storage it took months
               | of telling people they couldn't allocate more.
               | 
               | * Saying their devops engineer cost twice as much as
               | their sysadmins (true) but then when you look the devops
               | engineer is using automation and managing literally a
               | hundred times more systems than the "cheaper" ops team.
               | 
                | * Saying their cost to run a VM was cheaper than EC2,
                | which was true if you looked only at the instance but not
                | when you calculated how much they were spending on
                | underutilized VM hosts, power / HVAC, facilities people,
                | etc.
               | 
               | It's totally possible to beat a major cloud provider on
               | costs[1] but you usually need to be operating at fairly
               | large scale to even approach the break-even point. This
               | is especially true when you have regulatory or policy
               | requirements for things like security and you include the
               | cost of monitoring all of the things which fall under the
               | cloud provider's responsibility -- management networks,
               | firmware version management, robust logging and IAM, etc.
               | are all easy to accidentally exclude when making
               | comparisons.
               | 
               | 1. Network egress as the most obvious area to attack
        
               | throwaway894345 wrote:
               | You still have to pay your generalists for the time they
               | spend writing and maintaining that bespoke software, and
               | moreover you can hire competent developers to develop
               | using cloud services. There's no reason to assume that
               | cloud services require incompetent developers, and indeed
               | the cheapest solution is often to employ competent
               | developers to integrate cloud services (whose development
               | and operations costs are shared among all customers,
               | whereas your company is footing the whole bill for your
               | bespoke services).
        
           | acdha wrote:
           | That's definitely not true for "all of those managed
           | services" -- you can't replicate CloudFront on EC2, for
           | example, and it'd be extremely unlikely that you could build
           | your own S3 replacement on top of EC2 storage without
           | spending considerably more on operations than you save. On-
            | premise storage is cheaper but you're still going to struggle
            | to see cost savings unless you either don't care about
            | reliability or availability and/or buy your storage by the
            | petabyte.
           | 
           | For the other services, it's still going to depend heavily on
           | your usage, staff time, and operational efficiencies. For
           | example, if you replace SQS with RabbitMQ you need to manage
           | multiple EC2 servers for reliability but those servers might
           | be significantly underutilized based on your traffic levels.
           | Whether or not you save money depends on how much you pay
           | your operators and how many messages you use: if you use less
           | than the free tier's million messages per month, there's no
           | way to beat it. Each million requests costs $0.40 or less, so
            | if you pay your ops person $50k annually (haha) you'd need to
            | be sending somewhat over 120 million messages per month to
            | pay for them to spend a single hour on O&M, even before
            | factoring in your EC2 usage.
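            | 
            | Back-of-envelope, assuming roughly 2,080 working hours a year
            | and a fully-loaded cost of about 2x salary (both assumptions;
            | adjust for your own numbers):
            | 
            |     salary = 50_000
            |     hourly = salary * 2 / 2080   # ~$48/hr fully loaded
            |     per_million = 0.40           # SQS, per million requests
            |     # Messages whose SQS cost equals one hour of ops time:
            |     print(hourly / per_million)  # ~120 (million)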
        
           | etothepii wrote:
           | Or any kind of demand spikes.
           | 
            | If you are using Lambda and suddenly need to run a million
            | Lambdas, you just can. If you want to have enough EC2
            | capacity standing by to handle that same peak, you are going
            | to pay for a lot of sleeping computers.
        
             | pclmulqdq wrote:
             | If your spikes go 5x over your base load, EC2 is still
             | cheaper. Even if your load spikes are 10-20x over your base
             | load, EC2 can still be cheaper. Past that, lambda is your
              | better bet. If you are comparing to Hetzner or OVHCloud,
              | those numbers are more like 25x (definitely cheaper) and
              | 50-100x (possibly cheaper).
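              | 
              | A toy model of where crossover ratios like these might come
              | from, in Python. Every price below is an assumption for
              | illustration (rough list prices for an m5.large-class box
              | and Lambda's per-GB-second rate, plus a guess at a
              | dedicated-box equivalent), not a figure from this thread:
              | 
              |   LAMBDA_PER_GB_SECOND = 0.0000166667  # approx. published Lambda price
              |   MEM_GB = 8                           # instance size we compare against
              |   EC2_ON_DEMAND_PER_HR = 0.096         # assumed m5.large-class on-demand
              |   DEDICATED_PER_HR = 0.02              # assumed Hetzner-style equivalent
              | 
              |   # Cost of one "instance-hour" of compute bought as Lambda time.
              |   lambda_hr = LAMBDA_PER_GB_SECOND * MEM_GB * 3600   # ~$0.48
              | 
              |   # If you provision servers for the peak (ratio x base load) while
              |   # Lambda only bills for the base-load compute actually used, servers
              |   # win while the spike ratio stays below Lambda's per-compute markup:
              |   ratio_ec2 = lambda_hr / EC2_ON_DEMAND_PER_HR
              |   ratio_dedicated = lambda_hr / DEDICATED_PER_HR
              |   print(f"break-even vs EC2 on-demand: ~{ratio_ec2:.0f}x")
              |   print(f"break-even vs dedicated box: ~{ratio_dedicated:.0f}x")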
        
           | throwaway894345 wrote:
           | Maybe if you ignore the costs associated with operating EC2
           | or if you have a steady, predictable workload. Or perhaps if
           | you have enormous (on the order of FAANG) scale to absorb
           | those operational costs.
        
             | devonkim wrote:
             | It gets odd in this distribution of potential customers
             | because once your scale gets large enough again you're more
             | than capable in theory of provisioning your own
             | datacenters. MS, Google, and Amazon all have major public
             | clouds while Apple and Facebook / Meta are deploying their
             | own bare metal for their core infrastructure and some trace
             | usage of the others most likely for compatibility reasons.
             | Only Netflix is basically zero-DC in the new school
             | infrastructure sense and even they used to have bare metal
              | for old infrastructure like Oracle servers and Big Iron
              | for, I think, accounting until not that long ago.
             | 
             | The real target customer IMO for these big public clouds
             | are non-technical corporations whose primary revenue and
             | personnel competencies aren't derived from mastery of
             | software but primarily around other verticals like
             | agriculture, manufacturing, etc. Insert Werner Vogels'
             | criticism of the HBR article emphasizing focusing upon core
             | competencies and outsourcing non-core competencies like IT
             | (read: because technology is so important to every business
             | at scale now).
        
       | hazmazlaz wrote:
       | There is no way this figure is accurate. The annual spend cited
       | of $1,000,000 is purely hypothetical, as admitted here:
       | 
       | "However, all this data and processes need to happen on a server
       | and, of course, we used AWS for it. A few years of growth later,
       | we're handling over 70,000 pages per minute, storing around 560
       | million pages, and paying well over $1,000,000 per year.
       | 
       |  _Or at least we would be paying that much if we stayed with
       | AWS._ Instead, we were able to cut costs by 80% in a little over
       | three months with some out-of-the-box thinking and a clear plan.
       | "
        
         | pessimizer wrote:
         | > There is no way this figure is accurate.
         | 
         | You've got the FUD covered, but you also need to add at least
         | some substance to your claim. How do you know this figure would
         | not be accurate? Why is your (hypothetical, not offered)
         | estimate better than the author's?
        
           | merb wrote:
           | well the problem with the article is basically that they left
            | out a lot of important detail, like which bare metal servers,
            | how many, where do they host now, did they use CloudFront or
            | Cloudflare, did they use an edge cache? What about reducing
            | costs by stopping unneeded resources? A lot of their workload
            | looks dynamic. It's also fishy what they wrote here:
           | 
           | > After testing whether Prerender pages could be cached in
           | both S3 and minio, we slowly diverted traffic away from AWS
           | S3 and towards minio.
           | 
            | if they served directly from s3 that would be... stupid?
           | 
           | > In the last four weeks, we moved most of the cache workload
           | from AWS S3 to our own Cassandra cluster.
           | 
           | is also strange. it misses a lot of detail but it does not
           | look like they just migrated away from s3...
           | 
           | (looks like their new hoster is hetzner, from
           | service.prerender.io )
        
             | pessimizer wrote:
             | > well the problem with the article is basically that they
             | left out a lot of important detail,
             | 
             | The problem with your comment is that you insisted that
             | they gave you enough detail to definitively determine that
             | they were either lying or mistaken.
             | 
             | > There is no way this figure is accurate.
        
       | throwaway20221 wrote:
        
       | 0xbadcafebee wrote:
        | They saved $800K on their AWS bill, but:
        | 
        |   - may spend $250K on servers, replaced after 3 years becomes $83k/yr
        | 
        |   - may spend $120-250K on extra staff to maintain the infrastructure
        | 
        |   - may spend $15K for a cage in a DC
       | 
       | They still save $452K/yr overall (actual savings 1st year only
       | $285K). It's still a savings for sure, but always keep TCO in
       | mind.
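        | 
        | Spelling that out in Python (using the upper end of the staff
        | estimate; all figures are the ones above):
        | 
        |   aws_savings = 800_000      # reduced AWS bill per year
        |   servers     = 250_000      # hardware, amortized over 3 years after year one
        |   staff       = 250_000      # upper end of the extra-staff estimate
        |   colo_cage   = 15_000
        | 
        |   first_year   = aws_savings - (servers + staff + colo_cage)       # ~$285k
        |   steady_state = aws_savings - (servers / 3 + staff + colo_cage)   # ~$452k/yr
        |   print(first_year, round(steady_state))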
       | 
       | The real fun comes later when you outgrow your cage and there's
       | not enough space left in that DC, or they just have shitty
       | service constantly knocking out your racks, and you have to
       | consider splitting your infra between DCs (a huge rewrite) or
       | moving DCs (a huge literal lift and shift). Have been part of
       | both, it's... definitely a learning experience.
        
       | theptip wrote:
       | Perhaps I'm missing it in the OP -- I don't see any mention of
       | what they actually moved to. CoLo? VPS? On-prem?
       | 
       | This seems like a key detail when telling people about your
       | migration off AWS.
        
         | jacooper wrote:
         | On-prem
        
       | [deleted]
        
       | P5fRxh5kUvp2th wrote:
       | I'm glad to see more of these types of articles, but at the same
       | time I'm a bit flabbergasted that this isn't obvious for so many
       | people.
       | 
       | These cloud providers are, by definition, charging you more than
       | it would cost you to run it yourself. What you get in return is a
       | guarantee of expertise and an ecosystem.
        
         | Octoth0rpe wrote:
         | > These cloud providers are, by definition, charging you more
         | than it would cost you to run it yourself.
         | 
         | That is _not_ a given. They're charging you more than it costs
         | _them_ to run it.
         | 
         | They get:
         | 
         | - much lower hardware prices than you
         | 
         | - lower bandwidth prices than you
         | 
         | - likely lower electricity costs than you
        
           | Marazan wrote:
            | AWS bandwidth costs charged to consumers are a notorious
           | ripoff.
        
           | rglullis wrote:
           | That is true of any datacenter, but any offering from
           | AWS/GCS/Azure/Digital Ocean/Vultr is an order of magnitude
           | more expensive than Hetzner/OVH/Scaleway clouds, and _orders_
           | of magnitude more expensive than if I just get a handful of
           | dedicated servers and run my own Minio servers (to replace
           | S3) and manage my own databases and build my own OpenStack
           | /k8s/Nomad/Swarm cluster.
        
           | AtlasBarfed wrote:
           | AWS's bandwidth charges are highway robbery.
           | 
           | It's my impression that AWS competes with the EC2 instance
            | cost (as that's what new customers look at), and the
           | bandwidth/storage only becomes apparent when you are locked
           | in.
           | 
           | Really what AWS should be is one or two phases of service
           | maturity: dev infrastructure and experimentation is phase
            | one, phase two would be the "it's a couple servers"
           | production, BUT: with a scale plan for phase 3 being not-AWS.
           | 
           | Having a mature/battle tested phase 2 --> phase 3 should be a
           | market advantage in the modern business landscape, but that's
           | also a post-acquisition/exit phase, so none of the presumed
           | target HN crowd care about it.
           | 
           | There are extensive articles and public knowledgebases on
           | various technologies and architectures, but there is
           | effectively nothing on AWS independence out there, even
           | though almost every mature organization will need to face it.
        
             | hotpotamus wrote:
             | I'm also a bit blown away that anyone is surprised that AWS
             | is expensive. Sure you can express a lot of the costs in
             | small numbers per hour or $0.08 per gigabyte of bandwidth,
             | but those things all add up quickly at scale.
             | 
             | I assumed phase 3 was where AWS started giving you massive
             | discounts because you represent so much business for them,
             | but I've never been at a place big enough to swing that
             | stick at them.
        
               | tablespoon wrote:
               | > I'm also a bit blown away that anyone is surprised that
               | AWS is expensive. Sure you can express a lot of the costs
               | in small numbers per hour or $0.08 per gigabyte of
               | bandwidth, but those things all add up quickly at scale.
               | 
               | Could it be magical thinking based on marketing hype and
               | autoscaling fantasies?
        
           | moqmar wrote:
           | That is why VPS providers are cheaper than running your own
           | hardware, at least assuming you're not always fully utilizing
            | it. Cloud providers, on the other hand, don't really sell
            | the resources - they mainly sell convenience, and that seems
            | to be worth a lot.
        
             | P5fRxh5kUvp2th wrote:
             | VPS is virtualized, of course it's cheaper than running
             | your own hardware.
        
           | pclmulqdq wrote:
           | The whole idea that since they have economies of scale, they
           | must pass those savings on to you is a misunderstanding of
           | how companies work.
           | 
           | Realistically, a company will charge slightly less than
           | whatever your alternative is, and they will provide products
           | that restrict the number of alternatives you have.
           | 
           | AWS, GCP, and Azure provide several differentiated products
           | that lock you in to higher prices on everything else.
           | Effectively, these three have an oligopoly on "highly
           | differentiated cloud services." They are only competing with
           | each other on those services, and they are competing with
           | "servers plus ingress/egress costs to our differentiated
           | services" on their commodity products. That is the real
           | reason why AWS egress costs are so high. It prevents you from
           | picking and choosing where to buy each part of your cloud
           | footprint, and locks you into buying AWS. Bandwidth costs are
           | what keep you inside the AWS/GCP/Azure walled garden.
           | 
           | Lower-end providers like Linode, DigitalOcean, Vultr, and
           | Cloudflare have parts of the differentiated offering, but not
           | all of it. These people will have lower prices than
           | AWS/GCP/Azure since their offering is less differentiated,
           | but they will still charge you more than you would pay by
           | renting a server, since they offer more products.
           | 
           | Finally, hardware rental providers like Hetzner and other
           | operators are competing directly with you buying the hardware
           | and paying for datacenter space and bandwidth. Datacenters
           | typically charge a premium for power, even when they are in
           | areas with low-cost power.
           | 
           | Notice that none of these companies are competing with
           | _large-scale_ server buyers who have the same hardware
           | /bandwidth/power costs that they do. As such, they do not
           | _need_ to pass the savings they get onto you. That is where
           | they get their profit.
        
             | pessimizer wrote:
             | > The whole idea that since they have economies of scale,
             | they must pass those savings on to you is a
             | misunderstanding of how companies work.
             | 
             | That idea was not expressed in the comment that you replied
             | to. The comment you replied to was in response to a strong
             | claim that ignored the existence of economies of scale, and
              | therefore concluded that _it's so obvious_ that cloud
             | providers must always be more expensive than self-hosting.
        
               | P5fRxh5kUvp2th wrote:
               | That's just not how this works.
               | 
               | Dell can bulk order parts and has a strong negotiating
               | stance, which makes the hardware __CHEAPER FOR THEM__.
               | 
               | It says nothing about how much the market can bear for
               | Dell. And in fact, companies use Dell because of the
               | _guarantees_, not the cost. At any point in time, Dell
               | will replace hardware with no questions asked. They can
               | do this because of the margins between what it costs them
               | and what it costs you.
               | 
                | There are benefits to cloud; cost ain't one of them.
        
               | pessimizer wrote:
               | You're arguing with a position that no one is taking. I'm
               | a cloud-to-butt guy, and have no love for AWS or cloud. I
               | will probably always run my own servers because I spent a
               | lot of my life learning how to do it. I'm annoyed that
               | I'm telling you that as a way to try to trick you into
               | reading the comments that you're replying to.
               | 
               | edit:
               | 
               | Also, this shit?
               | 
               | > That's just not how this works.
               | 
               | It never makes you sound right.
        
               | pclmulqdq wrote:
               | At this point, it should be pretty clear that self-
               | hosting is cheaper, given all of the examples we have
               | seen.
               | 
               | AWS's economies of scale have as much to do with its
               | pricing as the phases of the moon, and citing them as a
               | reason why AWS could be cheaper than self-hosting is
               | pretty ignorant in itself.
        
               | pwinnski wrote:
               | On the contrary, self-hosting is only cheaper under some
               | circumstances, which is why when those circumstances are
               | met, it becomes a story worth a post on HN.
               | 
               | Whether economies of scale, or the efficiency of managed
               | services at scale, or use-based loss leaders, there are
               | many ways in which AWS services could be, and often are,
               | cheaper than self-hosting. Of course, not always.
        
               | pclmulqdq wrote:
               | In my experience, it is almost always not cheaper to use
               | AWS services, unless your workload is exceptionally
               | bursty in an unpredictable way or fits entirely in the
               | free tier. Pretty much nothing at AWS is a loss-leader
               | except the free tier. Also, if your workload is small and
               | you don't need to hire cloud ops folks, you can come out
               | ahead on TCO.
        
               | pwinnski wrote:
               | So we have gone from a categorical "not cheaper" to
               | "almost always not cheaper," except for two
               | circumstances, no, three. Progress!
               | 
               | At my last company, providing SaaS for the education
               | market, moving from a datacenter to AWS saved almost 70%
               | year-over-year. In the datacenter, we ran machines to
               | cover our peak load, which only happened a few times a
               | year. In AWS, we scaled _way_ down and auto-scaled up
               | during those peak weeks.
               | 
               | Is the entire education market "exceptionally bursty?" I
               | suppose it could be considered so. Bar exams and midterms
               | and finals certainly don't happen every week.
               | 
               | I'll wait for people to tell me how AWS was the wrong
               | solution, and how we did everything wrong before that,
               | but the bottom line is we saved a _lot_ of money,
               | accelerated our development schedule by building all new
               | functionality using the so-called  "serverless" stack,
               | and succeeded so well during the pandemic that another
               | company acquired us... which is why it's my former
               | company.
        
               | pclmulqdq wrote:
               | It sounds like you made a good decision, and yes, it
               | sounds like your entire market (particularly the parts
               | around exam hosting) is exceptionally bursty. I worked
               | with some folks at a university on some of their IT
               | systems, and they hit huge traffic spikes (literally
               | 1000x their base load) around course registration and
               | final exams. Their solution was to put 100 gbps NICs on
               | their Oracle server...
               | 
               | I don't know how big your peak was compared to your
               | average load, but if it was anything like that,
               | serverless was a great call. Buying enough hardware for
               | the bar exam and idling it 99.5% of the time sounds
               | incredibly wasteful. However, this is an exceptionally
               | bursty workload.
               | 
               | Most services do not have this level of traffic
               | variability in such a way that a CDN can't handle it for
               | you. 10x peak to trough variability (after your CDN) is
               | fairly common, but still considered bursty, and in that
               | case, AWS serverless still doesn't look great compared to
               | DO droplets. Many services have daily or weekly cycles
               | (without events like the bar exam), and run analytics
               | workloads in their off hours.
        
               | pessimizer wrote:
               | Again. This was a reply to a strong, clearly expressed
               | claim. You can make up other related claims to argue
               | with, but you should find somebody actually making them
               | in order to be more effective.
               | 
               | It is not obvious that growing your own wheat will always
               | be cheaper than buying bread.
        
               | pclmulqdq wrote:
               | It's interesting that you chose a competitive market to
               | compare this with. In a competitive market, efficiencies
               | go to the customer. In an oligopoly or monopoly,
               | efficiencies go to the provider.
               | 
               | It's obvious that baking your own bread is a bad idea if
               | you're okay with what you can buy from the competitive
               | market. It's equally obvious that baking specialty,
               | artisanal bread (and buying the commodity ingredients,
               | such as flour, salt, and yeast) is cheaper than going
               | around to several grocery stores to try to find the exact
               | kind of loaf you want.
               | 
               | AWS is in an oligopoly, so it should be obvious that its
               | efficiencies (in this case, its economies of scale) are
               | not passed on to the customer. The strong, clearly
               | expressed claim was fine.
        
             | jstanley wrote:
             | > The whole idea that since they have economies of scale,
             | they must pass those savings on to you is a
             | misunderstanding of how companies work.
             | 
             | Your parent comment wasn't saying that AWS is necessarily
             | cheaper than running hardware yourself. It was saying that
             | AWS is _not_ necessarily _not_ cheaper.
        
             | JohnHaugeland wrote:
             | > The whole idea that since they have economies of scale,
             | they must pass those savings on to you is a
             | misunderstanding of how companies work.
             | 
             | You're responding to someone who said "can," not "must."
             | 
             | What they said was "there's no requirement that cloud costs
             | more than traditional hosting. Their scale allows them
             | lower fundamental costs. They could outcompete traditional
             | hosting on cost if they wanted to."
             | 
              | There's no misunderstanding of how companies work in what
              | the parent poster said; in reality they are 100% correct, and
             | this is the path that CloudFlare is beginning to follow
             | with services like workers and R2.
        
               | pclmulqdq wrote:
               | R2 and workers are absolutely not cheaper than doing it
               | yourself. They are cheaper than AWS, but both of those
               | services are "value-added" from AWS, and as such have a
               | 20-100x markup.
        
               | JohnHaugeland wrote:
                | R2 is radically cheaper than doing it yourself.
               | 
               | It's a CDN, not a filestore. CDNs mean hundreds of
               | locations.
        
               | pclmulqdq wrote:
               | R2 is an object store. It is literally a drop-in
               | replacement to S3.
               | 
               | Cloudflare is a CDN, which, like all of its competitors,
               | is a lot cheaper than doing it yourself because _CDNs are
               | a competitive market_. There are at least 10 (if not
               | more) CDN providers out there that have footprints in
               | several thousand datacenters, and the product is
               | completely undifferentiated. In a competitive market, you
               | have to charge related to your costs, because your
               | competitors will cut prices to their costs.
               | 
               | In other words, in a competitive market, economies of
               | scale get passed on to consumers. In monopoly and
               | oligopoly markets, that is not the case.
               | 
               | Object stores are becoming a competitive market, with R2,
                | B2, and Wasabi entering the game, but buying your own
               | machines and using Ceph is still cheaper. You still pay
               | for "value adds" like not having to manage the machines
               | yourself.
        
               | JohnHaugeland wrote:
               | > R2 is an object store. It is literally a drop-in
               | replacement to S3.
               | 
               | A CDN is a thing where you can put things, and request
               | them by web, and have them geo-distributed.
               | 
               | R2 is actually a CDN. So is S3. Yes, I'm aware, the
               | vending companies also sell different products called
               | CDNs. Nonetheless, R2 and S3 (unlike B2) are CDNs. You
               | can make distributed hits to it across the planet to
               | addresses you don't control. You can hit it directly from
               | the web, and it will serve to you from a local node you
               | never knew anything about. Its goals are speed and
               | locality. You can look up programmers debating whether to
               | use CloudFront or S3 as their CDN based on time to
               | invalidate vs actual speed and cost control.
               | 
               | Attempting to describe it as "an object store instead of
               | a CDN" is kind of missing the point. It's both. It's also
               | a webserver. It's also a backup system. It's many things.
               | 
               | You cannot replicate R2 on a single machine. R2 delivers
               | locality and redundancy. Arguing over its title is
               | irrelevant; the reason to point out that it is a CDN is
               | to establish requirements for replacing it with a
               | homebrew solution.
               | 
               | The lowest practical cost for reliable machines is
               | arguably VPSes at around two dollars a month. Those VPSes
               | tend to give you 100-250 megabits of traffic monthly with
               | around 1gig of storage. Assuming two for redundancy,
               | you'll get maybe 500 megabits for about $4 a month. (I
               | personally would not feel safe with two, but we're
               | talking about cost cutting.)
               | 
               | That same 1g of storage on R2 falls in the free forever
               | tier.
               | 
               | To eat up the $4 a month in class B actions, you will
               | first need to consume the 10 million free events, then
               | spend 36 cents per further million. This is a further
               | 11.1 million, or 21.1 million requests per month.
               | 
               | Most low-tier VPSes will struggle around 10 requests per
               | second using NGINX, because their disks and memory are
               | terribly over-burdened. There are 86,400 seconds in a
               | day, suggesting you'll get around 860,000 requests per
               | day if you don't consider time shaping. Considering time
               | shaping - you don't have full requests coming in at every
               | time zone - you'll most likely have closer to 350,000
                | realistic. This means that your two VPSes will, just
                | barely, be able to handle the same amount of traffic.
               | 
               | Very, very slowly. You'll be lucky to get 200ms responses
               | locally, and to get VPSes that cheap, you'll need one in
               | the Netherlands in the burden corridor, and one in the
               | United States, most likely in Texas or Georgia.
               | 
               | Most of your customers will now be getting 300ms round
               | trips.
               | 
               | Why? Because you wanted to save against four dollars a
               | month. This isn't even enough money to biggie-size a
               | value meal.
               | 
               | All so that you could make a two-computer CDN on VPSes.
               | 
               | In the meantime, I've worked at two unicorns and neither
               | of them had traffic anywhere near this large.
               | 
               | You can also do the math with an unmetered 1u. Ten
               | megabit unmetered with 95% availability typically goes
               | for about $19 a month, or $30 a month with a cheap 1u
               | attached. That's about 3.2t of traffic a month.
               | 
               | As soon as you try to put actual specific numbers to it,
               | and calculate actual specific costs, this just falls
               | apart.
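                | 
                | Here is that comparison spelled out in Python (the 10 req/s
                | throughput and the time-shaping factor are the assumptions
                | above; the R2 Class B pricing is approximate):
                | 
                |   # What does $4/month buy on R2 in read (Class B) operations?
                |   FREE_READS = 10_000_000
                |   PRICE_PER_MILLION = 0.36
                |   budget = 4.00
                |   r2_reads = FREE_READS + budget / PRICE_PER_MILLION * 1_000_000
                | 
                |   # Two $2/month VPSes at ~10 req/s each, time-shaped down to
                |   # roughly 350k requests per day apiece:
                |   vps_reads = 2 * 350_000 * 30
                | 
                |   print(f"R2 for $4/month: ~{r2_reads / 1e6:.1f}M requests")   # ~21.1M
                |   print(f"two cheap VPSes: ~{vps_reads / 1e6:.1f}M requests")  # ~21.0M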
               | 
               | .
               | 
               | > There are at least 10 (if not more) CDN providers out
               | there that have footprints in several thousand
               | datacenters
               | 
               | There's more than two thousand of them from the United
               | States alone. Akamai's CDN has over 400,000 nodes at over
               | 50,000 POPs.
               | 
               | I can think of more than 40 in-house just from the US of
               | that size (eg Netflix, Steam.)
               | 
               | What's your point?
               | 
               | .
               | 
                | > R2, B2, and Wasabi entering the game
               | 
               | Wasabi has been around since 2015. B2 isn't really the
               | same kind of thing as R2 or S3.
               | 
               | .
               | 
               | > because CDNs are a competitive market ... Object stores
               | are becoming a competitive market
               | 
               | Object stores have been around decades longer than CDNs.
               | 
               | .
               | 
               | > but buying your own machines and using Ceph is still
               | cheaper.
               | 
               | Not really, no. I just did the numbers, and they did not
               | pan out the way you claim.
               | 
               | Buying your own machines? You're lucky to find a 1u for
               | less than $1500, and colo for a 1u is typically at least
               | $40/mo. That covers 30 million writes and 300 million
               | reads a month for six months at R2. (For scale, the
               | Washington Post gets 86 million uniques a month, so if
               | WaPo is doing four hits per load, which seems high if you
               | sprite your images, then you're talking about larger than
               | WaPo traffic. Spriting your images is a lot less work
               | than building a CDN from scratch.)
               | 
               | The R2 cost for Washington Post is about $230 a month.
               | You ... you really want to fight a cost like that, at a
               | scale like that?
               | 
               | At that price, you're saving about $190 a month on
               | bandwidth, so disregarding the cost of upkeep, it takes
               | about eight months of WaPo tier traffic per slab to break
               | even on each $1,500 slab.
               | 
               | In order to compare against R2 you need multiple
               | locations (redundancy is critical to safety,) so for this
               | to make sense, you need to be able to deliver multiple
               | Washington Posts of traffic. If you can do that, well, I
               | can make $230 a month not a big deal to you. Let's talk.
               | 
               | If you're trying to save money against four dollars a
               | month, which I think was the actual realistic price here,
               | I'd warrant it would be a good idea to look at your
               | salary, convert it to hourly, and figure out how many
               | years you'd have to run that savings to recoup your first
               | hour of thinking about it. (Heck, I would even say that
               | about $230 a month - after all, if you make $125k that's
               | about an hour of your time, and I doubt even a very
               | talented engineer could set up a distributed R2
               | replacement, including buying and siting machines, that
               | fast.)
               | 
               | The vertical step cost is larger than the total cost in
               | almost any practical system.
               | 
               | In order for the self-hosting costs to pan out against a
               | distributed system like R2, you need to be looking at
               | weekly terabytes of traffic and hundreds of nodes.
               | 
               | Respectfully, no, doing this yourself isn't actually
               | cheaper. Not even close. You'll need a cost outlay to
               | convince me, and it'll have to explain why the one I just
               | did is wrong.
               | 
               | A realistic "buy your own hardware" CDN^H^H^Hobject store
               | doesn't have barebones 1us with tiny drives. A realistic
               | object store has a couple attached JBODs. (But you know,
               | this is some individual's personal file server, which has
               | been lionized into being a "competitor to R2" by ignoring
               | almost everything that R2 actually does.)
               | 
               | And of course, the second you actually pull that cost
               | outlay off, I'm just going to start talking about how
               | much cheaper it would be to build your own datacenter,
               | and then your own internet. Because as long as we're not
               | focusing on there being no business need to justify the
               | outlay, the larger you build, the better.
               | 
               | But it's "object hosting," not an R2 competitor, and you
               | want to self-build. So maybe we just knock this down to
               | two nodes. And then it's only $4,200 up front and $1,700
               | of traffic a month to break even, plus engineer costs for
               | building, maintaining, and debugging it.
               | 
               | (And honestly, if you're just using it for storage, why
               | not just set up FTP or RCP?)
               | 
               | So I googled "how much traffic should your website get."
               | I got hubspot.
               | https://blog.hubspot.com/blog/tabid/6307/bid/5092/how-
               | many-v...
               | 
                | That article - and I'm not entirely certain anything
                | authoritative even exists here unless someone defines
                | terms, which I haven't - is probably better to listen to
                | than me, at least.
               | 
               | They claim they surveyed about 400 "traffic analysts."
               | There's a common sense line here of "they work for people
               | who can afford entire staff for a job like that," so I
               | guess I assume by default that these folks mostly work
               | for very large sites, and the remainder merely for large
               | sites.
               | 
               | Like I don't think any private pages with 200 hits a
               | month have a "traffic analyst," you know?
               | 
               | Sure enough, even in that survey, 46% of them were in the
               | 1k-15k a month bucket, and less than half a percent were
               | in the 10m+ bucket. And remember, we needed 300 million
               | per server-month to break even.
               | 
                | My strongly held, evidence-based opinion is that the
                | scale at which competing with CDNs on cost becomes viable
                | is fundamentally irrelevant to all but the very largest
                | of sites,
               | in the way that a local single-location restaurant should
               | not be looking into making its own flatware or furniture
               | to control overhead.
               | 
               | And none of this counts the engineer costs.
               | 
               | My opinion is that in order to cost compete against CDNs,
               | you need to have hundreds of large websites as customers.
               | 
               | My opinion is that viewing R2 and S3 as "just a place to
               | store files" is akin to viewing a car as a place to
               | listen to music - not even the primary use case. S3 has
               | had http access from day one.
               | 
               | And yes, you can make a far cheaper radio than a car. But
               | in general, if you try to sell the result to car
               | purchasers, what I believe you will find is that you've
               | misunderstood what the market is attempting to purchase.
               | 
               | My experience is that 90% of the people I know using S3
               | are using it for web fronting directly, or to back
               | CloudFront.
               | 
               | .
               | 
               | Null hypotheses are important. So is actually having the
               | ability to use what you build.
               | 
               | Imagine how cheap I could make the CDN if you just keep
               | adding zeroes to the userbase all day, right? But it
               | would be a hollow demonstration, because I can't actually
               | deliver the customers to justify it.
               | 
               | The reason I've never heard of Ceph is that nobody I know
               | uses it. Nobody I know uses it because nobody I know
               | would invest this much effort into getting rid of a
               | couple dollar a month datastore.
               | 
               | I know thousands of people who are making SAASes and
               | other sites.
               | 
               | I am of the opinion that recreating object stores and
               | CDNs is one of the smallest levers that you can think
               | about turning, unless you're some starkly uncommon kind
               | of site like a video host.
        
               | mappu wrote:
                | Thanks for the long comment; there are a few insightful
                | parts, and a few parts I wanted to reply to:
               | 
                |  _> R2 is actually a CDN. So is S3. Yes, I'm aware, the
               | vending companies also sell different products called
               | CDNs. Nonetheless, R2 and S3 (unlike B2) are CDNs. You
               | can make distributed hits to it across the planet to
               | addresses you don't control. You can hit it directly from
               | the web, and it will serve to you from a local node you
               | never knew anything about. Its goals are speed and
               | locality. You can look up programmers debating whether to
               | use CloudFront or S3 as their CDN based on time to
               | invalidate vs actual speed and cost control._
               | 
               | S3 does not serve you from a local node (by my
               | understanding of 'local'), it always goes to the source
               | AWS region. It can take advantage of availability zones
               | but it is still regional. It is missing the "D" part of
               | "CDN". Perhaps this argument is just semantics, but the
               | conventionally held understanding of "CDN" is that it's
               | much much more widely geo-distributed. Otherwise you're
               | right that they both provide a storage function.
               | 
                |  _> Wasabi has been around since 2015. B2 isn't really
               | the same kind of thing as R2 or S3._
               | 
               | B2 added an S3-compatible API, so they're now directly
               | comparable.
               | 
               | Other recent entrants in this market are IDrive e2 (4
               | USD/mo/TB) and Storj (4 USD/mo/TB). It's definitely a
               | competitive market aside from the anti-competitive
               | pressures that keep people using S3 anyway, such as zero-
               | rated transfer within an AWS region to your other AWS
               | resources.
               | 
               | Storj is the closest competitive object-store to a CDN in
               | that it is possible to get data from the nearest
               | geolocated node, although you'll (A) have to use the
               | non-S3-compatible version of their API to do it, and (B)
               | tolerate a certain degree of crypto-adjacency,
        
               | pclmulqdq wrote:
               | R2 is not globally distributed and neither is S3. All of
               | my S3 buckets are in US-East. See here:
               | https://community.cloudflare.com/t/cloudflare-r2-doesnt-
               | dist...
               | 
               | R2 is not a CDN. S3 is not a CDN. Both are object stores.
               | They store objects with some redundancy in a local area.
               | CDNs are globally distributed. Object stores usually
               | aren't. You can use an object store for backups and for
               | holding webpages, but that doesn't make them backup
               | services or webservers either. Using only S3 (without
               | deduplication or compression) for backups is ridiculously
               | expensive, and using it as a webserver limits you to a
               | static site.
               | 
               | In terms of the costs, if you are small, the free tiers
               | (and cloud platforms in general) are great. If you are at
               | the point where you are paying $5-10k/month, you are
               | almost certainly overpaying by using cloud services.
               | 
               | Also, you don't even need an object store for a small
               | app. You can use a server with a hard drive if you have a
               | lot of data to store (plus backups and a redundant server
               | in another place for HA). Put a CDN in front of it - a
               | real CDN - and don't worry about it. Pay the monthly cost
               | for the CDN. Also pay for your backups.
               | 
               | It's fine if you want to pay AWS so that you can focus on
               | things other than efficiency. Most SaaS companies make
               | that trade. But you _are_ paying for it.
        
             | aetherson wrote:
             | There is plenty of competition for AWS, so we would, in
             | terms of economic theory, expect that AWS charges you
             | mildly more than it costs them to provide the service,
             | unless they have any particular economies of scale that
             | people like GCP and Azure (and the other less-known cloud
             | offerings) do not have access to. Which seems unlikely to
             | dominate the costing.
             | 
             | AWS may be able to charge a surcharge for being the biggest
             | and best-known cloud offering that most people have
             | expertise with, raising the de facto cost of switching.
        
           | fabian2k wrote:
            | Bandwidth is obviously extremely expensive on AWS. But even
           | EC2 is easily 5x as expensive as renting a dedicated server.
           | Of course this is a deeply unfair comparison as dedicated
           | servers don't have the flexibility of an EC2 instance and the
           | integration into other services. So if those matter to you
           | then the cloud is certainly worth it. But you are paying a
           | lot more for CPU, RAM and storage compared to other options.
           | You are also getting a lot more with AWS, but you're only
           | saving money if you can fully exploit the flexibility of the
           | cloud and scale the resources very well. And even then if you
           | need a lot of egress you're screwed on the costs in any case.
        
             | FpUser wrote:
             | >"dedicated servers don't have the flexibility of an EC2
             | instance and the integration into other services."
             | 
              | I understand that there are many cases where all of what
              | you mention is needed, but for the vast majority of normal
              | businesses, having main and standby servers on Hetzner /
              | OVH with on-premises backup would cover all their needs
              | practically forever, with stellar performance. All at a
              | fraction of the cost. Many business owners just do not know
              | it, as they're not technical and their tech staff, for some
              | inexplicable reason, are all for doing "cool" cloud
              | architectures at the owner's expense. And of course that
              | magical "cloud" word sells itself. So many times when I
              | come to do some work for a new client, I hear the owner
              | proudly say: we are on the cloud. It takes some careful
              | effort to explain to them that they're just wasting money,
              | without upsetting them in the first place.
        
         | ehutch79 wrote:
         | When you're a small team, possibly solo, having someone else
         | deal with hardware is key.
         | 
         | Focusing just on what you're working on is great.
        
           | P5fRxh5kUvp2th wrote:
           | assuming the money isn't an existential risk, that's
           | absolutely true.
           | 
           | I was commenting on the expense rather than why you would (or
           | wouldn't) want to use cloud.
        
         | goodpoint wrote:
         | > These cloud providers are, by definition, charging you more
         | than it would cost you to run it yourself
         | 
         | That's pretty obvious but years ago you would have been
         | downvoted into oblivion for writing that.
        
         | HWR_14 wrote:
         | You also get a lot of finely incremental costs and the ability
         | to grow. Like, can you really buy 0.1% more IT people? Well,
         | yeah if you already have a thousand. Otherwise you have to buy
         | in bigger chunks. So you save money because you can timeshare
         | talent and hardware. And the ability to write a larger check to
          | scale is valuable for consumer-facing applications you're
          | hoping to grow.
        
           | zo1 wrote:
           | Sure you can. Ask one of your devs to spend X amount of time
           | on something. Might not be an exact 1 for 1 or 0.1% but it
           | still beats the many many hours you'd waste architecting
           | around cloud-specific issues and nuances. Or the cost of
           | dealing with the friction that comes when you want to do
           | anything with cloud in a manner that isn't cloud-supported.
        
             | HWR_14 wrote:
             | Devs love being told to set up servers! And always do so
             | well. After all, computer person is a fungible skill set.
             | 
             | I mean, it is fungible enough that they can learn, but
             | they'll probably make a lot of mistakes along the way and
             | be less efficient.
        
               | P5fRxh5kUvp2th wrote:
                | Any developer that can't set up a server has fundamental
                | gaps in their skillset.
        
               | unregistereddev wrote:
               | On a trivial level I agree, but only on a trivial level.
               | Most developers will not set up a server in the most
               | secure, maintainable manner. They're going to miss
               | important things:
               | 
                | - Disable password-based auth for ssh (require key-based
                | auth)
                | 
                | - Enable fail2ban or similar to slow down brute-force
                | login attempts
                | 
                | - Configure a firewall
                | 
                | - Install monitoring tools, antivirus, possibly backup
                | daemons, etc.
                | 
                | - Set up a sane swapfile for your use case, and configure
                | monitoring tools to alert when memory pressure gets too
                | high
                | 
                | - Set up disk mounts, configure monitoring tools to alert
                | when disk space is low, and consider a cron job to
                | automatically clean up tempfiles
                | 
                | - Either set up automated updates (typically excluding
                | kernel upgrades), or have a standard schedule for
                | manually applying updates
               | 
               | ...and probably other things that I'm forgetting because
               | I'm a developer, and it has been years since I've been a
               | sysadmin.
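                | 
                | As a minimal sketch of what checking a couple of those
                | items could look like (Python, assuming a typical Linux
                | layout; the paths and checks are illustrative, not an
                | exhaustive hardening audit):
                | 
                |   import shutil
                |   import subprocess
                | 
                |   def password_auth_disabled(path="/etc/ssh/sshd_config"):
                |       # True only if PasswordAuthentication is explicitly 'no'.
                |       try:
                |           with open(path) as f:
                |               for line in f:
                |                   parts = line.split()
                |                   if parts and parts[0] == "PasswordAuthentication":
                |                       return parts[1].lower() == "no"
                |       except FileNotFoundError:
                |           return False
                |       return False  # unset usually means the default ('yes')
                | 
                |   def fail2ban_installed():
                |       return shutil.which("fail2ban-client") is not None
                | 
                |   def disk_usage_pct(mount="/"):
                |       out = subprocess.run(["df", "--output=pcent", mount],
                |                            capture_output=True, text=True, check=True)
                |       return int(out.stdout.splitlines()[-1].strip().rstrip("%"))
                | 
                |   print("ssh password auth disabled:", password_auth_disabled())
                |   print("fail2ban present:", fail2ban_installed())
                |   print("root disk usage %:", disk_usage_pct())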
        
               | olddustytrail wrote:
               | I've done the sysadmin, DevOps, SRE journey.
               | 
               | A couple of others spring to mind:
               | 
               | - if you're not regularly testing your backups, you don't
               | have backups
               | 
               | - monitor SSL certs for expiry. It's amazing how many
               | outages I've seen because this was missed.
               | 
               | - are you allowing direct connection to your server from
               | the internet, or do you want to go through a bastion
               | host. Do you need 2FA on this, say because you need to
               | meet ISO27001.
               | 
               | - do all the above through CM (like Ansible or Puppet)
               | for obvious reasons.
        
               | P5fRxh5kUvp2th wrote:
               | I have a rant here about security being mostly bullshit.
                | Not even sysadmins can set up a server "in the most
               | secure, maintainable manner".
               | 
               | You don't hire a dedicated sys admin because they know
               | some voodoo magic developers don't, you hire a sys admin
               | because they have the _TIME_ to watch for security
               | updates. And you _PAY_ a company like RedHat to ease that
               | burden.
               | 
               | A dedicated sysadmin will typically do a better job than
               | a non-dedicated developer, but if that developer cannot
               | do a passable job, that's a gap in their skillset.
               | 
               | I may not be a network engineer, but I damned well
               | understand DNS, MPLS, networking, gateways, and the like.
               | Any developer who doesn't has gaps.
        
               | kingrazor wrote:
               | Maybe this says more about the devs I've worked with than
               | devs as a whole, but most devs I've personally met know
               | next to nothing about servers, and wouldn't be able to
               | set one up if you asked them.
        
               | alex_suzuki wrote:
               | Someone missed the DevOps revolution...
        
               | zo1 wrote:
                | That's a failing of our industry to some degree then. We
               | automate everything and fix gaps with technical solutions
               | all the time. But now, 20 years after the internet became
               | huge, we think that we still haven't figured out how to
               | easily provision servers. We have, we just let AWS et al
               | capture all the value of that discovery.
               | 
               | Yes I exaggerate a bit, but I do so to make a point stand
               | out.
        
           | LanceH wrote:
           | And, importantly, you can turn off your commitment to
           | spending in a second.
        
             | HWR_14 wrote:
             | I thought that was less true when you started signing deals
             | for discounts. Commit to a spend of X and they'll reduce
              | your bill. That kind of stuff.
        
             | whatever1 wrote:
             | Making IT an OPEX problem is the main reason that C-suites
             | love cloud providers
        
         | elcomet wrote:
         | > These cloud providers are by definition, charging you more
         | than it would cost you to run it yourself
         | 
         | That's totally non-obvious and certainly false in most cases.
         | Cloud providers have huge economies of scale, which certainly
         | makes it less expensive to use cloud if you factor in all your
         | costs related to running it yourself.
         | 
         | This is like saying buying furniture is more expensive than
         | making your own furniture because they have a profit margin.
          | Even if you don't account for your own time, it will certainly
          | be false, as you'll pay much more for your raw materials.
        
           | unity1001 wrote:
           | > which certainly makes it less expensive to use cloud
           | 
            | AWS would rather pass that difference on to Amazon
            | shareholders than pass it on to you.
        
           | thehappypm wrote:
           | The furniture example is great. If I wanted to build an Ikea
           | table from scratch, I could not do it for less than the $20
           | they charge, given the cost of materials available to me, and
           | the fact that I can't buy a cup of paint for $2.
        
         | hotpotamus wrote:
         | I mean, in theory all services are sold to you with the goal of
         | making a profit, so there must be some margin baked in, but AWS
         | has enough margins to run the Amazon retail store as basically
         | a non-profit while also making enough money to finance Bezos's
         | penis rocket rides.
        
         | rossdavidh wrote:
         | I think some of this is due to the presence (until just
          | recently) of abundant VC money, encouraging tech companies to
          | trade money for time by outsourcing their server admin (or
         | part of it, anyway). It's one less thing you have to hire for
         | and spend time getting good corporate policies and procedures
         | on.
         | 
         | But, now that the emphasis seems to have shifted from growth-
         | at-all-costs to where-can-we-cut-costs, I think there may be
         | more organizations with a large enough server load to realize
         | that they can do it cheaper (or even just get a different cloud
         | provider to do it cheaper).
        
           | P5fRxh5kUvp2th wrote:
           | imo this is the right takeaway.
           | 
           | You see a lot of posters commenting that it isn't obvious
           | that it's more expensive, but that's a large part of why I
           | said I'm flabbergasted at how many people don't find it
           | obvious.
           | 
           | It is obvious, someone may still choose to pay it because
           | there's a natural tradeoff, but it's absolutely obvious.
           | 
           | I suspect most developers don't find it obvious because
           | they've never dealt with actual servers themselves.
           | 
           | It's akin to arguing changing the oil on your vehicle is more
           | expensive than paying someone to do it. No, it's more
           | convenient, and that may make it worth the price.
        
           | dkarl wrote:
           | It would be shocking if, in the long term, there were not
           | cloud providers with lower prices than any but the biggest
           | companies could achieve by themselves. AWS will have to work
           | hard to keep giving customers a reason to pay the premium.
        
             | nithril wrote:
             | Still waiting for OVH or Scaleway to be really multi AZ
             | (>2) with a network that cannot be cut by anyone...
        
       | alberth wrote:
       | Dedicated hosting providers.
       | 
        | I'm so amazed that somehow people completely forget that for
        | literally decades, web hosts have provided dedicated hosting
        | options at fantastic prices.
       | 
        | Yes, a loooong time ago it might have taken a few hours to
        | provision your dedicated server, and the instant server access
        | that AWS brought should not be discredited.
       | 
        | But large numbers of web hosts today allow you to
        | programmatically spin up a dedicated server instantaneously and
        | at a fraction of the cost.
        
         | goodpoint wrote:
         | Not to mention having your own datacenter, which is even
         | cheaper (unless you do something stupid).
         | 
         | EDIT: to the naysayers: I worked in various companies (from
         | tiny to huge) maintaining their own DCs and had access to
            | financial data. And I was involved in setting up new DCs.
        
           | MajimasEyepatch wrote:
           | Having your own data center is almost never cheaper nowadays
           | unless you have very specific hardware needs. And it is far,
           | far, far slower to build your own data center than to just
           | start using a cloud provider.
        
           | nithril wrote:
            | Might not be so obvious when summing all the costs (capex,
            | opex). Not to mention the difficulty of finding experienced
            | people to maintain it. Unless you are really big, of course.
        
         | kaptainscarlet wrote:
          | Hosting providers are a pain. My server was down for a full day
          | because the provider was being DDoSed. AWS has DDoS protection
         | by default. If I was on AWS, my server would not go down due to
         | network DDoS attacks. That is just one of many things AWS does
         | for you that many hosting providers don't.
        
           | doublerabbit wrote:
            | My hosting provider does, and successfully too. Maybe it's
            | time to move providers; I would.
        
           | mwcampbell wrote:
           | There are also affordable dedicated server providers with
           | DDoS protection, like OVHcloud.
        
         | that_guy_iain wrote:
         | > Yes, loooong time ago - to get your dedicated server might
         | have taken a few hours to provision and the instant server
         | access that AWS brought should not be discredited.
         | 
          | At one point it often took days for a dedicated server to be
         | set up. We also didn't have such nice provisioning tools.
         | 
         | Now it just seems like cargo cult to use cloud providers as the
         | only option. People just completely discount dedicated servers.
        
           | frognumber wrote:
           | In most businesses I've worked in:
           | 
           | developer costs >> infrastructure costs
           | 
            | A large AWS server is around $500/year, which is about 1-2
            | developer hours (with taxes, overhead, etc.) at the cost
            | scales I saw last time I priced this out. That's crazy
            | expensive in absolute terms, but if it saves a couple of
            | hours, it makes sense.
           | 
           | PaaS providers cost even more. I've gone with those in the
           | past, since it basically eliminated dev-ops. The major
           | downside wasn't cost, so much as flexibility.
           | 
           | Dedicated servers start to make sense for:
           | 
           | - The very low end (e.g. personal use, or hosting something
           | long-running for a small business)
           | 
           | - The very high end (e.g. once cloud costs start to hit
            | hundreds of thousands of dollars per year)
           | 
           | On the very high end, cloud providers will often cut a deal,
           | though.
           | 
           | My problem with AWS, recently, has been reliability. Servers
           | crash or have performance degradation a bit too often. That
           | leads to developer costs, and might be what pushes me back to
           | dedicated.
        
             | EricE wrote:
             | As the OP points out, the real AWS costs are in transit -
             | not the server costs. Their app generates a TON of traffic.
        
             | KronisLV wrote:
             | > An AWS large server is around $500/year, which is about
             | 1-2 developer hours (with taxes, overhead, etc) at the cost
             | scales last time I priced this out. That's crazy expensive
             | in the absolute, but if it saves a couple of hours, it
             | makes sense.
             | 
             | For comparison's sake, the net salary for a software
             | developer in Latvia is roughly between 950 and 2800 euros
             | per month: https://www.algas.lv/en/salaryinfo/information-
             | technology/pr...
             | 
             | Given the above number, some calculations of the gross
             | salary land at between 1350 and 4000 euros per month:
             | https://mansjumis.lv/darba-algas-kalkulators
             | 
             | So essentially one of your developer days would represent
             | weeks of work for a Latvian developer (depending on
             | additional expenses/overhead), which I find curious.
             | 
             | Developer costs are still likely to be higher than
             | infrastructure costs, but not nearly by as much, which
             | might explain why I've seen a lot of on-prem deployments
             | locally, and people shying away from going all in with
             | cloud technologies.
             | 
              | I wonder whether these differences are even more pronounced
              | in other countries where developers are comparatively less
              | expensive (at least in relation to using the cloud and/or
              | hardware), like India, Russia and elsewhere, and how much
              | the development culture differs as a result. Also how many
              | would opt for alternative providers like Hetzner,
              | DigitalOcean, Vultr, Scaleway, OVH and so on...
        
             | JohnHaugeland wrote:
              | > In most businesses I've worked in:
              | 
              | > developer costs >> infrastructure costs
             | 
             | That's what we said at my first Kleiner company. All the
             | time.
             | 
             | Pretty soon we had a $100k/mo server bill.
             | 
             | We were just webhosting. There was no need for any of it.
        
             | macinjosh wrote:
             | developer costs >> infrastructure costs is a shallow way of
             | looking at things.
             | 
              | You should be thinking of dev time as an _investment_, not a
             | cost. Invest developer time into high value activities like
             | creating robust services you own. Not into low value
             | activities like coding against a dozen different
             | proprietary, Amazon owned APIs that can change, go away,
             | and become more expensive any time they feel like it.
        
             | isoprophlex wrote:
             | 1-2 developer hours?!
             | 
             | Anyone looking to hire an ML guy for 250-500$/hr flat fee,
             | hourly rate, no taxes, no healthcare, cancel any time...
             | get in touch!
             | 
             | For that money I'll gladly bark like a dog, walk on all
             | fours and fetch your slippers.
        
               | Cwizard wrote:
               | To add a datapoint, a company I worked at budgeted
               | developer time at ~700euro/day. However almost no one
               | made more than 300EUR/day. The rest came from office
               | space, your manager's salary, laptop, extra benefits,
               | heating, consultants to tell your managers what you
               | already know, idk what else.
               | 
               | In the US salaries are a lot higher so in that context I
                | can see $300-500/day being realistic.
               | 
               | Really surprised me... Always made me wonder if all that
               | money was being spent well
        
               | [deleted]
        
               | dehrmann wrote:
               | That number would actually be the opportunity cost of
               | developer time.
        
               | elephantum wrote:
                | I have to mention that what you described is not what an
                | ML guy is supposed to be doing.
        
               | yjftsjthsd-h wrote:
               | Surely they mean to get training data for the robot;)
        
               | simfree wrote:
               | Well, it's either that or tuning/training/trimming a
               | model to fit your intended use case...
        
             | doublerabbit wrote:
             | Just wait until people hear about colocation.
        
             | that_guy_iain wrote:
             | > In most businesses I've worked in:
             | 
             | > developer costs >> infrastructure costs
             | 
              | What I've experienced a lot is that businesses are
              | fundamentally struggling to find sysadmins, so paying for
              | infrastructure is their only option. And now more and more
              | sysadmins are "AWS devops" who literally only know how to
              | manage an AWS stack, with many struggling with even basic
              | stuff such as figuring out how many resources they will
              | need, since more and more is autoscaled.
             | 
             | They can hire devs but can't hire sys admins. Hell some of
             | the time they can't even hire devs.
        
             | bratao wrote:
              | I consider this a fallacy. I'm in the dedicated server
              | camp, but had to use AWS in an "everything in the cloud"
              | company. I spend way more time on Lambda/CloudFormation/ECS
              | shenanigans than on the Terraform recipes I use on
              | dedicated servers. And this is not even considering the
              | higher latency between services and how hard it is to
              | debug.
              | 
              | In your case, you are not exchanging 2 hours of a
              | developer's time for a higher bill. In my experience you
              | get a higher bill plus 5 hours of a cloud expert, and
              | probably a solution where, for every change or problem, you
              | have to call that expert because no one else knows what to
              | do.
        
               | nucleardog wrote:
               | You could say this about... pretty much anything you're
               | not well-versed in.
               | 
                | If you're trying to use a thing that accomplishes a
                | similar outcome but in a totally different way, requiring
                | a totally different skillset, it's not surprising that
                | you found yourself needing someone who does have that
                | skillset to assist.
               | 
               | A Linux server guy trying to manage Windows servers will
               | probably need help from a Windows server guy. A car
               | driver trying to fly a Cessna somewhere will probably
               | need help from a pilot.
        
               | maerF0x0 wrote:
               | > the dedicated server camp,
               | 
               | > spend way more time with Lambda/Cloud formation/ECS
               | 
               | As you said, it's not your expertise. No one said AWS
               | could be used well by non-experts. The point isn't the
               | learning curve, but the terminal velocity once you get
               | there.
        
               | jacobyoder wrote:
               | FWIW, I'm in your camp mostly too.
               | 
               | That expert may not know what to do either, or... will
               | contradict earlier folks and say "you need to redo all of
               | this pile..." and you'll be stuck.
               | 
               | I've no doubt there are some workloads that really do
               | _require_ a level of complexity that various cloud
                | systems offer. Much of what I've seen doesn't require
               | it, but once someone starts down that road, they're
               | 'justified' in learning more and tying more stuff to the
               | cloud provider's way of doing things.
               | 
               | I cut my teeth setting up servers back in the 90s, and
               | that you can spin up 'full' servers (thinking various
               | shades of VPS) in a few seconds is crazy great. But we're
               | somehow now 'past' that... and we have mountains more
               | complexity to consider.
        
         | throwaway858 wrote:
         | The one big thing missing from dedicated hosts is an S3
         | equivalent. Sure, you can get a huge hard drive for cheap but
         | this will not have the durability requirements for storing your
         | precious data.
         | 
         | And if you try to use AWS just for S3 then you will pay a lot
         | extra for the bandwidth charges of bringing the data from S3 to
         | your server (something that is free if you were to use EC2 or
         | other AWS services).
        
           | theptip wrote:
           | Right -- that's the one implementation detail in the OP that
           | was interesting. It sounds like they ultimately used MinIO to
           | replace S3. I've seen people use Ceph, but it's apparently a
           | nightmare to operate a Ceph cluster. If you're on k8s I think
           | the "cloud native" way might be Rook, haven't looked into
           | that. Anyway, running an object store is painful.
           | 
           | Their notes here are a bit vague:
           | 
           | > When the migration reached mid-June, we had 300 servers
           | running very smoothly with a total 200 million cached pages.
           | We used Apache Cassandra nodes on each of the servers that
           | were compatible with AWS S3.
           | 
           | > We broke the online migration into four steps, each a week
           | or two apart. After testing whether Prerender pages could be
           | cached in both S3 and minio, we slowly diverted traffic away
           | from AWS S3 and towards minio. When the writes to S3 had been
           | stopped completely, Prerender saved $200 a day on S3 API
           | costs and signaled we were ready to start deleting data
           | already cached in our Cassandra cluster.
           | 
           | > However, the big reveal came at the end of this phase
           | around June 24th. In the last four weeks, we moved most of
           | the cache workload from AWS S3 to our own Cassandra cluster.
           | The daily cost of AWS was reduced to $1.1K per day,
           | projecting to 35K per month, and the new servers' monthly
           | recurring cost was estimated to be around 14K.
           | 
           | It says (briefly, in passing) that they used Cassandra to
           | implement the S3 API for their nodes, but maybe just to
           | replicate the S3 API that they were previously using? That's
           | an interesting choice I'd not heard of before. Perhaps all of
           | their individual files are quite small?
           | 
           | Then they moved to MinIO, which would be the S3 equivalent
           | that you are looking for.
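            | 
            | (For what it's worth, since MinIO speaks the S3 API, the
            | application-side change is usually just pointing the client
            | at a different endpoint. A rough sketch with boto3 -- the
            | endpoint, bucket and credentials below are made up, not
            | necessarily what Prerender did:)
            | 
            |     import boto3
            | 
            |     # Point the existing S3 client at a self-hosted MinIO
            |     # endpoint instead of AWS (hypothetical address/keys).
            |     s3 = boto3.client(
            |         "s3",
            |         endpoint_url="http://minio.internal:9000",
            |         aws_access_key_id="MINIO_ACCESS_KEY",
            |         aws_secret_access_key="MINIO_SECRET_KEY",
            |     )
            | 
            |     # Same calls as before, now served from your own hardware.
            |     s3.put_object(Bucket="prerender-cache",
            |                   Key="example.com/index.html",
            |                   Body=b"<html>...</html>")
            |     page = s3.get_object(Bucket="prerender-cache",
            |                          Key="example.com/index.html")["Body"].read()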
        
             | 0x457 wrote:
             | Well, their layout is essentially a map from url to html,
             | so cassandra would work well here.
             | 
              | MinIO is AGPL-3 (or commercial license), though. Pretty
              | sure using it as a cache would be considered a combined
              | work?
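              | 
              | On the first point, a url-to-html cache in Cassandra can be
              | dead simple. A sketch with the Python driver (node address,
              | keyspace, table and TTL are purely illustrative, not what
              | Prerender actually runs):
              | 
              |     from cassandra.cluster import Cluster
              | 
              |     # Hypothetical cluster node and keyspace.
              |     session = Cluster(["10.0.0.1"]).connect("prerender")
              | 
              |     session.execute("""
              |         CREATE TABLE IF NOT EXISTS page_cache (
              |             url  text PRIMARY KEY,
              |             html blob
              |         )
              |     """)
              | 
              |     # Cache a rendered page for a week, then let it expire.
              |     session.execute(
              |         "INSERT INTO page_cache (url, html) VALUES (%s, %s) "
              |         "USING TTL 604800",
              |         ("https://example.com/", b"<html>...</html>"),
              |     )
              | 
              |     row = session.execute(
              |         "SELECT html FROM page_cache WHERE url = %s",
              |         ("https://example.com/",),
              |     ).one()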
        
           | alberth wrote:
            | Lots of web hosts also have S3-compatible equivalent
           | offerings. They typically market it as "object storage".
        
           | pwinnski wrote:
           | That same logic works up and down the line. If what you need
           | is disk storage, then you are limited to dedicated hosts that
           | provide something akin to S3. There are some! But that's also
           | true when you want a database that isn't sqlite. Now you need
           | a dedicated host that provides something akin to DynamoDB,
           | and will manage it for you. Then you decide you need
           | queueing, and you can either install and manage that yourself
            | or look for a dedicated host that provides something akin to
           | SNS/SQS. And so on...
        
       | shrubble wrote:
       | I expect to see a great deal more of the "cheap and cheerful" AWS
       | migration stories in the future. With the tanking of the market
       | and (apparent) limits to growth being in the forefront, reducing
       | expenses will become more important.
       | 
       | Before, it was easy to justify almost any expense with the "we
       | just need to get 1% of this $100 billion market" and now it is
       | "hunker down and do everything you can to be ramen-profitable, in
       | order to survive and thrive".
        
       | Gregioei wrote:
       | So little real information...
       | 
       | So the team is now responsible for backups, hardware
        | ordering, forecasting, etc.?
       | 
       | How big is the team now compared to before?
       | 
       | Does it scale?
       | 
        | If you price it correctly and keep the free tier small, I would
        | have either talked to AWS for better pricing or moved to another
        | cloud provider.
       | 
       | S3 on AWS is a total no-brainer, minio on bare metal might mean
        | much more work and a bigger infra team than the business actually
       | wants.
       | 
       | I would also love to know what optimizations are already in
        | place. Does Cloudflare caching work? Are the results compressed
        | at rest? Is geolocation latency relevant?
       | 
        | Why even Cassandra? Are websites not unique? Wouldn't nginx and a
        | few big servers work?
       | 
       | But who knows? The article doesn't tell me that :-(
        
       | TheGuyWhoCodes wrote:
       | "We used Apache Cassandra nodes on each of the servers that were
       | compatible with AWS S3". What does this even mean?
       | 
       | Regardless, starting a new Cassandra cluster in late 2022?! I bet
       | they can save even more by just going with scylladb
        
         | joshstrange wrote:
         | That was also confusing to me as Cassandra is a NoSQL DB last I
         | checked. I found this [0] online that indicates with some extra
         | software you can talk to it like S3 but yeah...
         | 
         | [0] https://dzone.com/articles/s3-compatible-storage-with-
         | cassan...
        
       | [deleted]
        
       | m0llusk wrote:
       | There is not quite enough information here to be sure, but this
       | article highlights transmission costs. This particular business
       | model involves throwing around big chunks of data just in case
       | they end up being needed and then handing them back out in
       | response to potentially large numbers of requests. That would
        | make this particular usage pattern fit exactly what AWS is
       | charging the most for. Also many alternative AWS services that
       | can be used to speed up or simplify services are not really going
       | to help with this case.
       | 
       | So an alternative way of interpreting this is more along the
       | lines of: We may have saved up to 80% of server costs by moving
       | from AWS, but you almost certainly won't save that much even if a
       | bunch gets spent on developing operations and tools.
        
         | varsketiz wrote:
         | You can save even more if your app uses richer formats than
         | images.
         | 
         | Also, if you are bigger and can start really negotiating with
         | hardware providers.
        
       | Joel_Mckay wrote:
       | In general, last time I looked at AWS it made sense from 2TB to
       | 30TB a month, and under 400k connections a day. If either range
        | was exceeded, then the service ceased to be the economical choice
       | when compared with CDN providers, and colo/self-managed
       | unlimited-traffic options.
       | 
       | For example, if you primarily serve large media or many tiny
        | files to clients that don't support http Multipart Types, then
       | AWS can cost a lot more than the alternatives. However, AWS is
       | generally an economical cloud provider, and a good option for
       | those who outsourced most of their IT infrastructure.
       | 
       | The article would be better if it cited where the variable costs
       | arose.
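        | 
        | Just to make the transfer side concrete, a quick back-of-napkin
        | sketch (assuming roughly $0.09/GB for AWS egress, which is about
        | the public first-tier rate; the flat-rate server price is a
        | made-up example):
        | 
        |     # Rough monthly egress comparison; numbers are illustrative.
        |     AWS_EGRESS_PER_GB = 0.09    # USD, approximate first-tier rate
        |     FLAT_RATE_SERVER  = 200.0   # USD/month, unmetered-traffic box
        | 
        |     for tb in (2, 10, 30, 100):
        |         aws_cost = tb * 1024 * AWS_EGRESS_PER_GB
        |         print(f"{tb:>4} TB/month: AWS egress ~${aws_cost:,.0f} "
        |               f"vs flat rate ~${FLAT_RATE_SERVER:,.0f}")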
        
       | maxfurman wrote:
       | I feel like there's a middle step missing from this article (or I
       | just missed it reading quickly) - did they build their own data
       | center? Where are these new non-AWS servers physically located?
        
         | hotpotamus wrote:
         | Colo would also be an option. I run a mix of colo and cloud
         | currently.
        
         | fabian2k wrote:
         | You can easily rent space in a data center and put your own
         | servers in there or just rent dedicated servers. The cheapest
         | provider there is probably Hetzner, but then your servers are
         | in Germany. This gets you a real, full server for something
         | between 30-200 EUR per month. There are many other hosting
         | providers that offer dedicated servers also in the US, e.g.
         | OVH. This works the same way, they are usually more expensive
         | than Hetzner.
         | 
         | Renting a dedicated server is very similar to renting a VM. It
         | has a much higher provisioning time, usually hours to 1-2 days.
         | And it takes some upfront cost or a time commitment. But it
         | also has a lot more raw power. So if you don't require the
         | flexibility VMs and the cloud provide they can be a very
         | effective alternative.
        
           | systemvoltage wrote:
           | If you want rocksolid dedicated servers of utmost HA and
           | reliability, look no further than Equinix Metal:
           | https://metal.equinix.com/product/servers/
           | 
           | They run datacenters for other datacenter companies. Most
           | people have never heard of them because they're a $6B/year
           | revenue beast that runs behind the scenes; but they also
           | offer direct metal servers.
        
           | unity1001 wrote:
           | > The cheapest provider there is probably Hetzner, but then
           | your servers are in Germany
           | 
            | They have a US DC now, though it only offers cloud servers.
           | However even those are still way cheaper than any
           | alternative.
        
         | teraflop wrote:
         | In addition, usually the main _financial_ effect of going from
         | AWS to on-premises is to replace a lot of your recurring
         | monthly costs with up-front capital costs.
         | 
         | The article spends a lot of time talking about how much lower
         | their monthly bill is, but it says nothing about how much they
         | spent to buy those 300 servers in the first place.
        
           | brandon wrote:
           | In my experience with modest specifications, you can source
            | servers from tier 1 vendors (Dell, HPE, etc.) with 5 years of
            | support for less than the cost of a 1-year commitment for
            | roughly equivalent AWS or GCE instances (without prepayment).
           | 
           | The monthly opex to house and power and cool those servers
            | isn't _negligible_, but if you're doing back-of-napkin math
            | comparing MRCs to cloud, you can just deduct those costs from
            | the bandwidth charges that have been marked up 10,000%.
        
           | bombcar wrote:
           | If you want recurring monthly costs for capital devices there
            | are companies set up to do that for you.
           | 
           | They lease you the equipment. Yay.
        
           | sabujp wrote:
            | By "servers" I think they meant microservice
            | apps/binaries/VMs/containers, etc., not physical machines. I
            | also want to know what physical hardware they purchased,
            | where they racked it, how much colocation costs, etc., etc.,
            | and they go into none of that.
        
         | Syonyk wrote:
         | You can rent space in your local datacenters. If you're a
         | business, you probably rent by the rack (and can more or less
         | do what you want in the rack subject to power limits and
         | datacenter policies). If you're an individual, most datacenters
         | have some shared rack space (with controlled access, for some
         | value of controlled - usually a datacenter employee has to be
         | in there with you) that's rented by the U.
         | 
         | I put a box of mine in a local datacenter, to their... endless
         | confusion (not only do they not deal with random individuals
          | that often, I actually _read_ contracts, so I asked questions
          | about why I had to prove my worker's comp payments and some
         | other weird stuff that they removed). Monthly rental payments
         | are about what I was paying in a range of cloud spend, but the
         | system is _far_ more capable, and I can do a lot more on it.
        
       | [deleted]
        
       | henning wrote:
       | OK, so they're now stuck maintaining their own Cassandra cluster.
       | How much does that cost?
       | 
       | If it costs you $1,000,000 a year to serve 1166 requests a
       | second, maybe you fucked up.
        
       | jacooper wrote:
       | Some S3 providers can give you free egress when using a CDN.
       | 
       | For example backblaze B2 offers free egress through Cloudflare,
       | Fastly, BunnyCDN.
       | 
       | https://help.backblaze.com/hc/en-us/articles/217666928-Using...
        
       | maerF0x0 wrote:
       | I've said this a hundred times and it seems not loud enough.
       | 
       | AWS is not cheap because of your server costs.
       | 
       | AWS is cheap because of elasticity, velocity (opportunity cost of
       | next feature), and reduced maintenance hours.
       | 
       | "The cloud" was never (afaik) was about getting a cheaper VPS. It
       | was about being able to get them on demand, give them back on
       | demand, and generally not have to maintain anything besides your
       | code (and maybe apply updates to datastores / AMIs)
       | 
       | Now, if those premises are not true for your startup/business,
        | then AWS is not the tool for you. I didn't see any analysis of
        | ongoing maintenance costs in the 800k saved, but will it take 1-2
        | FTE engineers to handle more on-call, more server upgrades, more
        | security patches, etc.? That's easily half that savings gone
        | already.
       | 
       | Edit: for the most part these attributes apply to GCP, Azure,
        | Heroku, etc. as well; it's not just about AWS.
        
         | draw_down wrote:
        
         | candiddevmike wrote:
         | Elasticity is a gamble. You're betting that you can save more
         | money investing in an elastic/on demand stack than what you'd
         | pay for static resources. Judging by how much the cloud
         | providers push this, the unknown cost to create/maintain auto
         | scaling infra/apps, and how intricate the pricing gets with
         | elastic resources/spots/faas, I still think the cloud providers
         | are coming out ahead vs folks using static compute.
        
           | TheCoelacanth wrote:
           | Not really. You're paying extra to not have to think about
           | how many static resources to allocate in advance.
           | 
           | For a mature business, that tradeoff probably isn't worth it,
           | but for a startup, the opportunity cost is too high. Spending
           | a lot on cost optimization doesn't make sense; you are better
           | off spending on growing income.
        
         | icedchai wrote:
         | Yep, the cloud was always about flexibility and convenience,
         | not cost.
        
         | wooque wrote:
          | Prerender, as far as I can see, was founded by a Hungarian and
          | most employees are Hungarian.
          | 
          | Even if he hired 2 FTEs in Hungary to maintain it (which I
          | doubt), it would eat 200k at most (probably much less), so they
          | still saved 600k.
          | 
          | For 800k he could probably hire 10 more people to improve
          | development, sales, marketing, and support, and that would be a
          | better investment than burning money on AWS.
        
           | maerF0x0 wrote:
           | Fair enough, I assumed typical San Francisco/Silicon valley
           | model.
        
           | nagyf wrote:
           | I'm from Hungary, and I have to tell you, you are WAY off
           | with those salaries.
           | 
           | As a senior software engineer, you can make maybe $35k a
           | year, before taxes, if you are good. You can make it $50k if
           | you are very good.
           | 
           | 2 years ago I was making $23k (yearly, before taxes), before
           | I moved to Canada and started working at Amazon for $170k
           | usd.
           | 
              | Europe, especially the eastern parts of Europe, has an
              | extremely cheap workforce.
        
             | lossolo wrote:
              | > As a senior software engineer, you can make maybe $35k a
              | year, before taxes, if you are good.
              | 
              | > ...especially the eastern parts of europe has an
              | extremely cheap workforce.
              | 
              | You can make that amount normally in Eastern Europe when
              | working on the local market; you do not need to be good,
              | average is enough. If you are good then you can make 60k+
              | USD. And if you are really good you can easily make 100k+
              | USD working remotely for a US-based company.
        
         | shudza wrote:
         | This is one of the few accurate comments in this thread.
        
         | chucky_z wrote:
         | I think there's some "cheat codes" now involving some of these
         | things though. For instance, you can use _some_ of AWS but keep
          | your actual compute/networking out and have things like
         | security patching and server inventory be completely solved by
         | using AWS SSM. There's also options like EKS Anywhere to have
         | managed on-prem k8s for way, way cheaper than running it in AWS
         | proper. These kinds of services I think are the future for
         | hybrid/on-prem folks.
        
       | jaclaz wrote:
       | As a side note, I find this:
       | 
       | >Do you have any advice for software engineers who are just
       | starting out?
       | 
       | >Don't be afraid to talk with the customers. Throughout my
       | career, the best software engineers were the ones who worked with
       | the customer to solve their problems. Sometimes you can sack a
       | half year of development time just by learning that you can solve
       | the customer's issue with a single line of code. I think the best
       | engineers are creating solutions for real world problems.
       | 
       | to be very good generic advice.
        
       | registeredcorn wrote:
       | (Note: I have never done any professional work in cloud. I could
       | be completely mistaken. Feel free to correct me if I'm completely
       | off-base.)
       | 
       | It's a fascinating article, for sure. I would have been
       | interested to hear what their backup strategy looked like though.
       | 
       | One of the big benefits of cloud services, that I am aware of, is
       | the assurance that if natural disaster strikes, you don't lose
       | all of your data. I kind of got the impression that, more than
       | anything else, _that_ is what you are paying for. Data protection
       | and uptime.
       | 
       | I suppose big enough bills could lead a company to make the kinds
       | of changes that Prerender did, but when that disaster does
       | strike, and it is time to try and recover from a fire, flood,
       | earthquake, etc. the responsibility and _speed_ of getting your
       | customers back online is reliant completely upon your staff - a
       | staff who might be extremely shaken up, hurt, or pre-occupied in
        | taking care of their own affairs. I'm not saying it's not
        | possible, but there is a kind of cost that comes in the form of
        | responsibility. It's a trade-off that I would not fault many
        | people for avoiding.
        
       | alexchantavy wrote:
       | I wish the article went into detail about what hardware they used
       | for each server, what was their disaster mitigation plan, and
       | other considerations that you don't need to worry about with
       | paying for a cloud provider.
        
       | lakomen wrote:
       | In other news, we got wet in the rain SCNR.
       | 
       | Are you really saying that AWS and other clouds are expensive?
       | Say it ain't so :)
        
       | zc2 wrote:
        
       | hnrodey wrote:
       | I find this interesting, if nothing else. My first question is
        | what was the opportunity cost of focusing manpower on setting up
        | on-prem infrastructure that now needs to be maintained? What on the
       | product roadmap was sacrificed/delayed in exchange for the time
       | on this project? What are the projected future hiring costs to
       | maintain these servers (and applications like Cassandra!) going
       | forward? Nothing is free, and at just 4-5 additional hires they
       | will be giving back a large chunk of that $800k to employees. IDK
        | - maybe that's a fair trade-off to pump up the common man with
       | money instead of the establishment.
        
         | brodouevencode wrote:
         | This is what a lot of people miss when they talk about moving
         | to/from cloud providers. The marginal cost to add X more
         | servers in the cloud is basically nothing, whereas to set up a
         | new rack for on-prem requires requirements gathering, purchase
         | orders, finance approvals, someone being at the dock when the
         | UPS truck arrives, rack and stacks, etc. Those are one-time,
         | yet very real costs. These fall under cap-ex which accounting-
         | wise is treated very differently than op-ex. Now the cost in
         | the cloud bakes all that in, and is distributed around with
         | other users of the provider. Your accounting models are also
         | easier ("pay for what you use").
         | 
         | Couple that with the very well known fact that AWS has
         | outrageous data egress charges and there are patterns that can
         | emerge where you're still in cloud but not racking up massive
         | outbound data charges.
        
           | fabian2k wrote:
           | You can rent servers if you don't want to bother with this.
           | The choice is not only between doing everything yourself and
           | the cloud, there are a lot of options in between.
        
             | brodouevencode wrote:
             | Yes - that's a very fair point. It should still be
             | calculated in the cost, and I don't think the article does
             | a very good job of identifying the tradeoffs to OPs point.
        
       | mabbo wrote:
       | I'll always celebrate stories like this, but I also don't take
       | some kind of anti-AWS lesson from it.
       | 
       | This company saved $800k/year. Perfect time to go in-house with
       | this solution.
       | 
       | But when they were 1/10th this size, they'd only have saved
       | $80k/year. Does that cover the cost of the engineering to build
       | and maintain this system? Maybe not. And when they were 1/100th
       | the size, it would have been laughable to go in-house.
       | 
       | At the right time, you make the right transitions.
        
         | hintymad wrote:
         | People don't consider productivity? Maybe things have gotten a
         | lot better in the industry now. Otherwise, to rehash an older
         | comment on HN:
         | 
         | I'd like to remind everyone about Uber's experience: no
         | EC2-like functionality until at least 2018, probably even now.
         | Teams would negotiate with CTO for more machines. Uber's
         | container-based solution didn't support persistent volumes for
         | years. Uber's distributed database was based on friendfeed's
         | design and was notoriously harder to use than DynamoDB or
         | Cassandra. Uber's engineers couldn't provision Cassandra
         | instances via API. They had to fill in a 10-pager to justify
         | their use cases. Uber's on-rack router broke back in 2017 and
         | the networking team didn't know about it because their
         | dashboard was not properly set up and what the funk is eBPF?
            | Uber tried but failed to build anything even close to S3.
            | Uber's HDFS cluster was grossly inefficient and expensive. That
            | is, Uber's _productivity_ sucked because they didn't have the
            | out-of-the-box flexibility offered by the cloud.
        
           | humanwhosits wrote:
           | and they had trouble moving workloads to the cloud because
           | bringing up new capacity was a giant set of circular
           | microservice dependencies
        
           | Melatonic wrote:
           | That also just sounds like Uber had hired crap talent....
        
             | GauntletWizard wrote:
             | And crap management to lead them. Uber hired plenty of
             | people smart enough to do better, but let the crap take the
             | reins and management failed to lead on anything.
        
         | xani_ wrote:
          | 80k? Honestly, it _probably does_. Our ops team of 3 spends
          | maybe 10% of their time on managing the few hardware racks we
          | have in our local colocation. There are even months where
          | nothing at the hardware/hypervisor level is touched.
        
           | oxfordmale wrote:
            | 3 people at 10% of their time is already 24K, assuming an
            | 80K salary for each of you. You don't mention patching
            | systems, or the time spent replacing the hardware racks every
            | x years. It is very easy to underestimate the cost of
            | maintenance.
        
         | lbriner wrote:
         | Yes, exactly this.
         | 
          | At what point do you have the time/money/confidence to invest
          | goodness knows how much in a data centre with space to grow, to
          | purchase an enormous amount of capital, to have it all
          | installed, etc.? The building alone could easily eat that first
          | year's saving.
         | 
         | How many people are now needed to fault-find bad
         | hardware/software/networks, to be on call for any problems? How
         | many calls out to the Electrician to fix some power issue?
         | 
          | How much to set up and run a large air-con system for the data
         | centre. Maybe not much in the US where aircon is common but
         | much more expensive in Europe.
         | 
         | The fact they could afford to do this over such a short time
         | period speaks to having a decent amount of cash on-hand.
        
           | jjav wrote:
           | > fix some power issue
           | 
           | > large air-con system
           | 
           | You wouldn't usually jump from AWS to buying up real estate
           | to build your own physical data center.
           | 
           | A sensible first step is to rent a rack at a colocation
           | facility. They handle power, cooling, redundancy, physical
           | access for you.
        
           | hamandcheese wrote:
           | > At what point do you have the time/money/confidence to
           | invest goodness knows how much in a data centre with space to
           | grow, to purchase an enormous amount of capital to have it
           | all installed etc. the building alone could eat that first
           | years saving easily.
           | 
           | Co-locating has no capital investment other than hardware,
           | and is pretty cheap.
           | 
            | A 40U rack of compute, priced as the equivalent EC2
            | instances, has a retail price easily in the hundreds of
            | thousands, if not a million+ USD per year.
           | 
           | Suppose each U has a $10k capital cost to make the numbers
           | round, that is $400k in capital.
           | 
           | All this to say is that I don't think capital is as big a
           | factor as you might think.
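            | 
            | As a toy payback calculation under those (made-up) numbers --
            | say the equivalent cloud bill lands mid-range at $800k/year
            | and colo runs a few thousand a month:
            | 
            |     # All figures are rough assumptions, not real quotes.
            |     units_per_rack   = 40
            |     capex_per_unit   = 10_000     # USD of hardware per U
            |     cloud_equivalent = 800_000    # USD/year, mid-range guess
            |     colo_per_month   = 3_000      # USD, rack space + power
            | 
            |     capex = units_per_rack * capex_per_unit       # 400,000
            |     monthly_saving = cloud_equivalent / 12 - colo_per_month
            |     print(f"hardware paid back in ~{capex / monthly_saving:.1f} months")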
        
             | Corrado wrote:
             | I feel that a lot of posts like this might be under-
             | representing the true costs of running your own hardware. I
             | was only tangentially associated with a large-ish operation
             | and I can tell you that there are loads of things that take
              | a lot of time but are often overlooked. Things like
             | detecting and replacing bad hardware. HDs don't last
             | forever and when they go bad it's not fun; especially if
             | you have to source some specific model for your 7 year old
             | server platform.
             | 
             | Understanding your licensing and warranties is another huge
             | cost that people don't take into account. We used to spend
             | hours and hours figuring out if we could replace systems
             | and what it would cost us.
             | 
             | Finally, you have to dispose of all that hardware when it
             | gets too old or out of warranty. If you've never had to do
             | that you probably have no idea how hard it is to do it
             | correctly and so that it satisfies your SOC2 auditor.
             | 
             | All of these things (plus more) really add up. It's not
             | just purchasing the components and installing them in
             | racks. The management is probably even more expensive than
             | the hardware.
             | 
             | And all of these problems go away with a cloud provider.
        
               | Melatonic wrote:
               | Your VAR should be doing a lot of that work for you -
               | sure you need to understand licensing but if you have a
               | good rep they should be providing you with up to date
               | info on all of this.
               | 
               | HDD replacement is trivial and some COLOs can even do it
               | for you - and any good vendor will have a thing setup
               | where your appliance automatically notifies them and a
               | new drive is overnighted to your COLO. Sure - maybe a few
               | times a year you drive out to swap a drive.
               | 
               | If you do not have good monitoring setup then that is
               | entirely fixable - there are many stellar solutions these
               | days out there. Hardware is easier than ever.
        
             | boltzmann-brain wrote:
             | In my experience doing detailed projections of exactly
             | something like this - with racks full of GPU compute power
              | - the infrastructure has paid for itself after one quarter,
              | maybe two, depending on what volume you're at. There is
              | very rarely a reason to use GPU compute in the cloud - and
              | the advantages start with just one single GPU.
        
         | vlunkr wrote:
         | Also your company needs to be mature enough to know really well
         | what your hardware requirements are. For a growing company,
         | it's really great to switch your RDS instance to a bigger type
         | because your database load has tripled in a few months and you
         | didn't know that was coming.
        
         | MajimasEyepatch wrote:
         | Thank you for bringing up the engineering cost. People always
         | look at this as just AWS > Bare metal or whatever, but there's
         | so much more to it than that.
         | 
         | If they saved $800k per year, and they have to hire four
         | additional ops engineers to run it at a cost of $400k per year,
         | then they actually saved $400k. Which is still substantial and,
         | all else being equal, sounds worthwhile.
         | 
         | If they saved $800k per year, and they have to hire ten
         | additional ops engineers to run it at a cost of $1 million per
         | year, then they've actually gone and burned $200k on something
         | that provides no additional value to the business or their
         | customers.
        
           | otabdeveloper4 wrote:
           | Hiring a person to do 'docker compose up' for you is orders
            | of magnitude cheaper than whatever AWS-specific knowledge is
           | needed to not have AWS crap its bed.
        
             | boltzmann-brain wrote:
             | not just "not have AWS crap its bed", but whatever
             | sacrifices you need to perform in order for amazon not to
             | come back with a bill that just so happens to be $1M higher
             | than you expected it to be because of some gotcha that
             | amazon profits off of, like (say) routing stupid traffic to
             | one of their servers, currently occupied by you, that would
             | be idle otherwise. Nice "mistake" to make.
        
             | billythemaniam wrote:
             | If their AWS spend is $1M/year, it is not as simple as
             | "docker compose up" on bare metal.
        
               | intelVISA wrote:
               | Might need some systemd as well then to be fair
        
           | tester756 wrote:
           | huge salaries + ten "ops engineers", lol.
           | 
            | I know *data centers* that run on a few naive 20-year-old
            | admins/technicians + 1-2 "engineers", and all of them
            | combined receive a salary of $5-10k/month in Eastern Europe.
        
             | citizenpaul wrote:
             | One time I was doing research on some really cheap data
             | centers in basically 3rd world countries. Just out of
             | curiosity to see how cheap it could really get.
             | 
             | One of the companies had a picture of one of their
             | "datacenters." It was something like 10 racks in a moldy
             | unfinished basement with visible water on the floor of what
             | I'm guessing was a residential building. Maybe they had
             | mopped the floor for the picture?
             | 
             | I thought it was strange they would put that picture on
                | their site and not some CGI or something. I guess at least
             | you know they are not lying about being a real place?
        
               | choletentent wrote:
               | I've just had a good laugh from your comment! Maybe water
               | on the floor was for cooling.
        
               | tester756 wrote:
               | I mean, the one I'm talking about is relatively
               | reasonable on infra
               | 
                | they've got a decent building and a proper data center
                | tier, so power/network redundancy, etc, etc
               | 
               | they're just cheap as fuck when it comes to people
        
               | MajimasEyepatch wrote:
               | People are the most important part of any operation!
               | That's the last thing you want to cheap out on.
        
               | loloquwowndueo wrote:
               | Share the company name / picture please :)
        
               | citizenpaul wrote:
               | It was years back, I don't think I could find it again. I
               | wish I saved it. Finding the company in the first place
                | was a rabbit hole project. If my memory is correct it was
               | in Brazil.
               | 
               | I really should start a blog for some of the weird
               | research I do sometimes.
        
             | MajimasEyepatch wrote:
             | And there's no way in hell I'd trust them to run the core
             | infrastructure for a business worth hundreds of millions of
             | dollars.
        
           | jiveturkey wrote:
           | You're not including reliability and availability
           | projections, and the intangible cost of transferable skills
           | wrt infrastructure. (ability to hire sufficiently skilled
           | people to run it)
        
           | mbesto wrote:
           | Exactly. In the "old IT world" we call this TCO = Total Cost
           | of Ownership.
        
           | Dma54rhs wrote:
           | AWS knowledge and engineering doesn't come for free either.
           | People have built whole careers and businesses around it.
        
             | boltzmann-brain wrote:
                | Indeed, and in fact running your own metal is an order of
                | magnitude easier than puzzling through the Brazil-style
                | nightmare that AWS is. Both kinds of people cost money.
                | It's not like things run themselves if you go with AWS.
        
               | monkpit wrote:
               | Brazil?
        
               | maxfurman wrote:
               | This is a reference to the film Brazil[0], which centers
               | on a labyrinthine bureaucracy
               | 
               | [0] https://www.imdb.com/title/tt0088846/
        
               | CreepGin wrote:
               | Wow! I thought it was a typo for "bizarre"... but now I
               | know
        
               | dalmo3 wrote:
               | > labyrinthine bureaucracy
               | 
               | This is a reference to the country Brazil [0], which
               | centers on...
               | 
               | [0] https://www.bbc.com/news/business-18020623
        
               | earleybird wrote:
               | . . . and ducting
               | 
               | https://www.youtube.com/watch?v=K9gO01pyv24
        
               | malfist wrote:
               | Brazil is Amazon's internal build system. Not sure what
               | GP is talking about here.
        
               | icedchai wrote:
               | You won't get the reference unless you've seen the movie.
        
               | moduspol wrote:
               | If that were true, then there'd be no value proposition
               | to AWS.
               | 
               | It is absolutely easier to use S3 than to create your own
               | fast, highly available, infinitely scaling storage
               | solution on your own metal. It requires more than zero
               | knowledge / expertise to use S3, but far less than it
               | would to implement and run yourself.
               | 
               | If you can accept that, then we already agree in
                | principle. It's just a matter of where the line is drawn
               | for various services and use cases.
        
               | saiya-jin wrote:
                | Not necessarily. They probably already have a pool of
                | admins/devops that have run their systems for the past 30
                | years, but have 0 AWS experience. Also, most companies
                | don't need to scale infinitely; not everybody is building
                | the next Google (in fact, almost nobody outside SV is).
        
               | Kon-Peki wrote:
               | You can buy on-prem managed storage solutions. Like
               | PureStorage type things, where they call you up if
               | something goes wrong, and are FedEx'ing you replacement
               | parts before you even noticed that it had imperceptibly
               | failed over to the standby power supply or whatever.
        
               | agentultra wrote:
               | How many applications need infinitely scalable
               | distributed object storage?
               | 
               | I've worked at exactly one storage company that had a
               | customer that had exascale data. They were doing cancer
               | research as I recall and their test machines generated a
               | lot of data. I heard stories about CERN at conferences
               | but they also self host their data.
               | 
               | But those were outliers. All of the large and small
               | enterprises outside of that could fit all of their "big
               | data" in the memory of a single blade server and still
               | have plenty to spare. You can get machines these days
               | with many TB's of RAM.
        
               | treffer wrote:
               | This is my favorite joke.
               | 
               | Is the issue big data or small machine?
               | 
               | Memory size of the largest AWS/GCP/... instances is a
               | good indicator for small machine.
               | 
               | It doesn't mean I would go for the large machine. It just
               | means I won't subscribe to "big data" as a reason to do
               | X.
               | 
               | And it forces me to recalibrate this boundary regularly.
               | Looks like double digit TB is currently the memory
               | boundary for renting. I was still on the single digit TB
               | train.
        
               | dijit wrote:
               | There is a value prop for sure.
               | 
               | It's hard to quantify how best you'll be served but a lot
               | of people are following the mantra of "nobody got fired
               | for going AWS".
               | 
               | It makes sense for some people, others are cargo-culting;
               | yet more are fanning the flames of that cargo cult
               | because their pay check depends on it.
               | 
               | Sysadmins are/were paid much less than cloud native
               | devops people, and you need the same number of them
               | unless you keep things _very_ simple, which cloud
               | providers do not incentivise. One need only look at AWS
               | reference architectures.
        
               | icedchai wrote:
               | One _good_ DevOps person can achieve much more than a
               | single sysadmin. Most old school sysadmins were doing
               | everything manually with relatively little automation.
               | 
                | You are certainly correct about the overly complex
               | AWS reference architectures. I've seen relatively simple
               | applications with just as much infrastructure code
               | (generally "terraform", occasionally CloudFormation JSON)
               | as application code. It's crazy.
        
               | jjav wrote:
               | > Most old school sysadmins were doing everything
               | manually with relatively little automation.
               | 
               | That's not true of any place I experienced in the early
               | to late 90s. If you meant earlier, perhaps, I wasn't
               | there.
               | 
               | The growth of perl, for example, was in great part from
               | the sysadmin community automating everything.
        
               | icedchai wrote:
               | Most of the scripts I was familiar with from that time
               | were one-offs. The code wasn't very reusable. They were
               | automating a task on one machine. Today's "DevOps" are
               | automating things across N machines. It is a matter of
               | scale.
        
               | jjav wrote:
               | > They were automating a task on one machine.
               | 
               | Remember that in those days you likely only _had_ one
               | machine. By  "you" I mean the whole department. Everyone
               | was logged into it and it handled email, talk, documents,
               | compilation and debugging, etc.
               | 
               | That doesn't take away from the fact that the sysadmins
               | were automating everything they needed to do, in the
                | environment that existed.
        
               | doctor_eval wrote:
               | Exactly - I used to be the sysadmin on a single machine
               | in a medical laboratory in the 90s. It was a DG Aviion
               | and, together with the storage unit, it took up a whole
               | room. Maybe 300 people used it simultaneously.
               | 
               | The idea that I would need to automate the installation
               | of an OS and applications on a fleet of machines was
               | never contemplated because it didn't make sense. I had
               | one machine and OS upgrades arrived in the post every 6
               | months or so - on cartridge tape.
               | 
               | It was about need, not competence.
        
               | icedchai wrote:
               | The places I'm talking about were mid 90's, dozens of
               | AlphaServers, Sun Sparcs, HP/UX systems, IBM AIX
               | machines, etc. It was more than a single machine, but
               | less than 50. There was shockingly little automation.
               | Everything was a pet, with crazy NFS mounts all over the
               | place, custom scripts in /usr/local/etc not in version
               | control, etc. If there was ever a power loss, it was a
               | pain getting everything back up.
        
               | dijit wrote:
               | I think we're talking past each other because we're
               | talking about different eras.
               | 
               | I worked in the mid-00s and it wasn't a lot better in the
               | dev space. People passing USB keys to each other was
               | pretty common. SVN and CVS were around, but a lot of
               | developer code lived outside of them, and peer review
               | meant going to someone's desk and walking through the new
               | code - nothing at all like what we have today for
               | processing change requests.
               | 
               | You're talking about systems administration when it was
               | going from pets to cattle, the primordial period where
               | people were automating but not applying software
               | development practices on themselves yet because even the
               | software development practices weren't well defined.
               | 
               | Sysadmin work was always lagging 5 years behind
               | development w.r.t. programming practices. The same is
               | true today of devops.
               | 
               | Tell me how many Terraform repositories have unit tests,
               | or infra scripts for that matter, which would be much
               | simpler to test.
        
               | dijit wrote:
               | You're quite mistaken on the first point.
               | 
               | Most of the automation that you know of as "devops tools"
               | was born from sysadmins.
               | 
               | Terraform was written by a sysadmin; puppet, ansible,
               | saltstack and cfengine too.
               | 
               | It is revisionist to think sysadmins were not automating
               | their jobs.
               | 
               | Old school sysadmins used to know C, bash and Perl.
               | 
               | New school devops just traded that for a handful of DSLs
               | and Python.
               | 
               | I'm sure some technicians working in the IT department
               | managed to get by running scripts created by sysadmins
               | and claiming not to code, but it was definitely the
               | common case that 20 years ago sysadmins could code and
               | worked tirelessly to "automate themselves out of a job"
               | (literally a mantra I was told as a sysadmin 15 years
               | ago).
        
               | icedchai wrote:
               | I'm sure we all had different experiences with old school
               | sysadmins. The ones I'm familiar with (early 90's, ISP
               | industry) could do shell scripting and some perl. C was
               | _way_ out of their skill sets, except for a bit of copy-
               | and-paste.
               | 
               | The people who wrote tools like terraform and ansible
               | were _engineers_ with system administration skills. Those
               | are very rare.
        
               | dijit wrote:
               | I'm biased. I worked in teams with such people basically
               | my whole career.
               | 
               | The pay increased, the titles and tools changed but the
               | mentality didn't.
               | 
               | People just started beating their chests about devops and
               | pooping on the legacy of sysadmins, which is what most
               | devops/SREs are.
               | 
               | Sounds like they were automating though.
               | 
               | "Cattle not pets" was a sysadmin mantra, but the business
               | wanted pets most of the time.
        
               | icedchai wrote:
               | There was certainly some automation being done by the
               | sysadmins I was familiar with (I did say "relatively
               | little" in my original post, meaning compared to today!)
               | It was definitely more pets than cattle at the places I
               | was familiar with. These were small-ish companies, and
               | the business didn't care as long as the systems worked.
        
               | dijit wrote:
               | Sounds like a "devops" in that same position would be
               | doing the same then.
        
               | Melatonic wrote:
               | Not necessarily true - many were doing tons of scripting.
        
               | late2part wrote:
               | Except that S3 doesn't scale infinitely.
               | 
               | Try uploading 5PB of content and downloading it that day
               | at 1M RPS.
        
               | lakomen wrote:
               | MinIO is FOSS and a lot cheaper. I don't see your point.
        
               | dijit wrote:
               | I think that's a misrepresentation of the argument
               | though.
               | 
               | MinIO by itself is fantastic software for sure, but
               | running storage appliances is shades of difficult or
               | expensive.
               | 
               | On the one hand you can buy a NetApp filer and support
               | plan, which is basically as self-healing as the cloud is:
               | NetApp will send a human to replace failed drives before
               | you even know a fault is imminent.
               |
               | On the other hand, that's expensive, but running your own
               | is complex if you're not set up to do it already.
               | 
               | MinIO is but one part of a data storage puzzle. Though a
               | very important one.
        
               | [deleted]
        
               | neeleshs wrote:
               | The value prop of AWS/GCP etc. goes beyond bare metal vs
               | VMs. Running HA databases (even if you are a startup, you
               | need this), centralized logging, secrets manager, KMS,
               | making sure disks are encrypted, something like pub/sub
               | as a messaging backbone for your application, load
               | balancers... the list goes on.
               |
               | There's a huge difference between reading a few pages of
               | documentation on Secrets Manager and clicking a button to
               | get the service, versus deploying and maintaining a vault
               | on bare machines.
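               |
               | To make that concrete, a minimal sketch of the managed
               | side using boto3 (the secret name and region are made-up
               | examples; credentials come from the usual environment or
               | instance-role chain):
               |
               |     import boto3
               |
               |     # Fetch a secret from AWS Secrets Manager; storage,
               |     # encryption and access control are handled by the
               |     # service rather than by something you operate.
               |     sm = boto3.client("secretsmanager",
               |                       region_name="us-east-1")
               |     resp = sm.get_secret_value(SecretId="prod/db-password")
               |     db_password = resp["SecretString"]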
        
               | Draiken wrote:
               | Yet most of these aren't needed for small crews... do I
               | care that my DB is not HA when I have 1 RPS? You said
               | yes, but I disagree. Do I need centralized logging if I
               | have a few servers? It's already centralized for free.
               | 
               | It's much easier to set up and forget basic bare metal
               | servers with PG/NGINX and whatnot than it is to automate
               | using dozens of AWS services.
               | 
               | People pretend that AWS doesn't take engineers to run,
               | when IMO it's basically the same human cost, if not
               | bigger as complexity grows. You just don't pay that cost
               | upfront, but you sure do pay it later, with interest.
               | 
               | You get stuff like HA but that's not free. You also now
               | have to manage a new boatload of services, scripts,
               | changing APIs, etc.
        
               | neeleshs wrote:
               | YMMV, I run a startup that serves critical customer use
               | cases, and from day 1 I had to care that my DB was HA,
               | backed up, and could be restored. Actually had a case
               | early on where we had to test this because I accidentally
               | deleted customer data and had to restore - took all of
               | one click, with no scripts, no prior investment into
               | backup management etc. We also had to go through infosec
               | reviews from day 1, and many of these are a must have.
               | Doesn't matter if it's a small crew or large. Our
               | customers care.
               | 
               | 3 years fast forward with hundreds of customers, and we
               | haven't had a need to hire someone full time to "manage"
               | these services. This may/will change down the line, but I
               | couldn't have built my business without one of these
               | cloud vendors.
               | 
               | Edit: I also know the pain of building (and managing)
               | this on bare metal (my last company did that) - and it
               | was just apache/php/mysql on bare metal. It was a mess.
        
               | MajimasEyepatch wrote:
               | Not to mention compliance. If you have to go through
               | third-party audits (and most businesses do after reaching
               | a certain size), then using standard services from a
               | public cloud provider can simplify things significantly.
               | Sometimes that's because you can just offload "security
               | of the cloud" to the cloud provider, and other times it's
               | because there are well-established guidelines for how to
               | achieve, say, PCI compliance on AWS.
        
           | jjav wrote:
           | > have to hire
           | 
           | But [to state the obvious but sometimes overlooked] you don't
           | just point an AWS account at the company git repo and walk
           | away.
           | 
            | There's a lot of work and expertise needed to keep an AWS
            | setup up and running, so you already have to hire people.
            |
            | At a modest-size startup we already have a DevOps team of
            | close to ten people to manage AWS. That same size team could
            | easily keep bare metal servers running. At our scale it's
            | still a bit cheaper to be on AWS, but not too far along the
            | growth curve it'll start to become cheaper to be on bare
            | metal.
        
             | antod wrote:
             | Yup, I worked for a company that moved everything from colo
             | to AWS. Not only did the annual tech costs increase about
             | 10x (even with all the AWS migration credit funny math),
             | but they ended up with double the number of engineers to
             | look after it.
             | 
             | Admittedly this was hopefully only going to apply for a few
             | years until they finished rearchitecting everything, but I
             | doubt it would ever reduce back down to anywhere near the
             | original costs.
             | 
             | The equation is also mirrored for something that started
             | and grew on AWS though - going to bare metal means building
             | up tooling and processes you didn't have before. The
             | transition will be expensive in either direction.
        
             | doctor_eval wrote:
             | Agreed. I was doing some unrelated research yesterday and
             | stumbled upon the fact that apparently (in Australia) 41%
             | of companies see an _increase_ in IT staff after adopting
             | cloud services. [1]
             | 
             | I thought this was pretty weird since the original value
             | proposition (AFAIR) was to reduce costs/head count. But
              | everywhere I've worked that used AWS had specialists
             | employed to manage AWS. And I think the value proposition
             | is more that it's easier to find people who know AWS -
             | because there are training providers and certifications -
             | than people who know how to do everything themselves.
             | 
             | Once you get to a certain size I think you can attract a
             | team who can build out rather than buy in, and in so doing
             | reduce costs.
             | 
             | [1] https://www2.deloitte.com/au/en/pages/economics/article
             | s/eco...
        
         | systemvoltage wrote:
          | The initial bit is important though. It creates a circular
          | dependency. If you start out without AWS, everything about
          | how your company builds software - monoliths, microservices,
          | queues - changes.
          |
          | Look at Stack Overflow's architecture, which stands apart
          | _because_ it was never designed to work in the cloud from the
          | beginning: https://stackexchange.com/performance
         | 
          | I'd argue that 90% of SaaS products don't have SO's scale. The
          | whole thing would work just fine on a couple of FreeBSD
          | servers running Postgres and an un-dockerized monolith - half
          | a rack at most, with redundancy and replication.
         | 
          | But if you've built your whole company around proprietary
          | Lambda functions and a vast range of AWS offerings, you're
          | setting yourself up to never get out of the mess.
        
           | jacobsenscott wrote:
           | Yes - but the FOMO industry is too strong.
        
         | [deleted]
        
         | outworlder wrote:
         | Thank you!
         | 
         | At work, people keep complaining about our costs and coming up
         | with spreadsheets showing how much money we would save with our
         | own hardware.
         | 
         | They never add the engineering costs. When they do, they forget
         | to include the ongoing maintenance. Or the new SMEs that need
         | to be hired (and on call). Or even the opportunity cost of
         | doing a multi-year migration to arrive at the exact spot they
         | already are today.
         | 
          | All that money, and no one is looking into optimizing our
         | systems to shrink the bill...
        
         | i_have_an_idea wrote:
         | Anecdotal, but for one of my projects, Google Cloud / Compute
         | Engine VMs cost around ~$5k a month all in. The exact same
         | setup, when we moved it to LiquidWeb, cost us $2k.
         | 
         | Don't underestimate the savings that can be made from switching
         | from a big-name cloud provider to a more old school hosting
         | provider.
        
       | seattle_spring wrote:
       | The company I used to work for did this. They successfully cut
       | server costs!
       | 
       | ...at the expense of 40 eng-years (20 eng over 2 years) spent on
       | the migration.
        
       | gibsonf1 wrote:
       | We've just finished moving servers from AWS to
       | https://hetzner.com - and saved 10X with servers of double the
       | capability. A great experience so far.
        
         | jacooper wrote:
          | How did you get them to approve a large number of cloud
          | instances / dedicated servers?
          |
          | I heard they are very reluctant to increase the per-user
          | limit on cloud instances.
          |
          | Also, how did you deal with S3? Did you switch to another
          | provider, like B2?
        
           | gibsonf1 wrote:
           | We haven't run into the number of servers issue. For S3,
           | we've switched to Wasabi which very nicely uses the identical
           | API.
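            |
            | A minimal sketch of what that swap looks like with boto3,
            | assuming Wasabi's public S3 endpoint (s3.wasabisys.com -
            | check the right endpoint for your region) and made-up
            | bucket/key names; the rest of the S3 client code stays the
            | same:
            |
            |     import boto3
            |
            |     # Point the standard S3 client at an S3-compatible
            |     # provider by overriding the endpoint URL.
            |     s3 = boto3.client(
            |         "s3",
            |         endpoint_url="https://s3.wasabisys.com",
            |         aws_access_key_id="...",
            |         aws_secret_access_key="...",
            |     )
            |     s3.upload_file("page.html", "my-cache-bucket",
            |                    "cache/page.html")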
        
             | jacooper wrote:
              | Honestly Wasabi's advantage is free egress, but B2 +
              | Cloudflare gives you a better billing system, without the
              | three-month object retention thing, and a CDN with no
              | egress limit.
        
       | wglass wrote:
       | I found the headline to be misleading. The article is mostly
       | about the migration process (which is interesting), but very
       | little about the details of the cost savings.
       | 
       | What does it cost to run their data center? What are the salaries
       | they are paying for internal IT efforts to administer it? Is it
       | an apples-to-apples comparison, e.g. are they load balancing
       | across multiple datacenters in case of an outage?
       | 
       | It sounds like this was a good move for Prerender but it's hard
       | to generalize the cost claims to other situations without
       | details.
        
       ___________________________________________________________________
       (page generated 2022-09-28 23:02 UTC)