[HN Gopher] Aurora I/O optimized config saved 90% DB cost
       ___________________________________________________________________
        
       Aurora I/O optimized config saved 90% DB cost
        
       Author : fosterfriends
       Score  : 58 points
       Date   : 2023-08-10 18:29 UTC (4 hours ago)
        
 (HTM) web link (graphite.dev)
 (TXT) w3m dump (graphite.dev)
        
       | ignoramous wrote:
       | > _We reached out to some contacts at AWS to find out why the
        | Aurora team built this. Did I/O Optimized do some clever
       | engineering with sharding and storing data in S3? Were they just
       | feeling generous?_
       | 
       | No surprises here. Come what may, Amazon has always strived to
        | lower costs (the low costs, more customers, more volume
        | flywheel?). This is but one example.
       | 
       | AWS adopted _cost-follow_ pricing for S3 (very different from
       | value-based pricing) after apparently a lengthy debate: As they
       | get more efficient, they want to pass down those savings to
        | customers (as price reductions):
        | 
        |   S3 would be a tiered monthly subscription service based on
        |   average storage use, with a free tier. Customers would choose
        |   a monthly subscription rate based on how much data they
        |   typically needed to store. Simple ... The engineering team was
        |   ready to move on to the next question.
        | 
        |   Except that day we never got to the next question. We kept
        |   discussing this question. We really did not know how
        |   developers would use S3 when it launched. Would they store
        |   mostly large objects with low retrieval rates? Small objects
        |   with high retrieval rates? How often would updates happen
        |   versus reads? ... All those factors were unknown yet could
        |   meaningfully impact our costs ... was there a way to structure
        |   our pricing [to] ensure that it would be affordable to our
        |   customers and to Amazon?
        | 
        |   ... the discussion moved away from a tiered subscription
        |   pricing strategy and toward a cost-following strategy. "Cost
        |   following" means that your pricing model is driven primarily
        |   by your costs, which are then passed on to your customer. This
        |   is what construction companies use, because building your
        |   customer's gazebo out of redwood will cost you a lot more than
        |   building it out of pine.
        | 
        |   If we were to use a cost-following strategy, we'd be
        |   sacrificing the simplicity of subscription pricing, but both
        |   our customers and Amazon would benefit. With cost following,
        |   whatever the developer did with S3, they would use it in a way
        |   that would meet their requirements, and they would strive to
        |   minimise their cost and, therefore, our cost too. There would
        |   be no gaming of the system, and we wouldn't have to estimate
        |   how the mythical average customer would use S3 to set our
        |   prices.
       | 
       | From: https://archive.is/lT5zT
       | 
       | I wonder what explains AWS' high egress costs, though.
        
         | Nextgrid wrote:
         | > I wonder what explains AWS' high egress costs, though.
         | 
         | Vendor lock-in. It prevents people from otherwise picking the
         | best provider for the task at hand - for example using some
         | managed AWS services, but keeping the bulk of your compute on-
         | prem or at a (much cheaper) bare-metal host.
         | 
          | It makes sense, but I wish there were an option to opt out, to
          | allow high-bandwidth applications that are fully on AWS (at
          | the moment AWS is a non-starter for many of those, even if you
          | have no intention of using AWS competitors as in the scenario
          | above).
         | 
          | Maybe they should just price end-user egress and competitor
          | egress differently (datacenter and business provider IPs
          | priced as they are now, but consumer-grade provider IPs much
          | cheaper/free)? That would still discourage provider-hopping,
          | while making AWS a viable provider even for high-bandwidth
          | applications such as serving or proxying media.
        
           | Szpadel wrote:
            | The explanation makes sense, except that cross-availability-
            | zone traffic is also expensive, and by this logic it should
            | not be.
        
             | twoodfin wrote:
             | It's presumably expensive because there's a lot less inter-
             | AZ bandwidth than there is intra-AZ bandwidth for the
             | obvious reasons.
        
             | hughesjj wrote:
              | IIRC some cross-AZ traffic uses vendors' fiber (albeit
              | definitely encrypted), but I could be completely wrong.
        
         | pahkah wrote:
         | > No surprises here. Come what may, Amazon has always strived
         | to lower costs
         | 
         | Maybe it's just a goal to reduce costs, but it seems likely
         | that this is a response to Google's introduction of AlloyDB, a
         | Postgres-compatible database competing with Aurora that is
         | advertised as having "no...opaque I/O charges". I doubt Amazon
         | was feeling generous.
        
         | victor106 wrote:
         | > Come what may, Amazon has always strived to lower costs
         | 
         | Agreed.
         | 
          | Azure, on the other hand, is notoriously expensive and does
          | everything possible to raise prices.
        
       | BrentOzar wrote:
       | Cut ours by over 40% too:
       | https://www.brentozar.com/archive/2023/06/aws-aurora-cut-our...
        
         | fosterfriends wrote:
          | The wild part is that there's been zero observable difference
          | in performance from this config change; I think it's just a
          | difference in billing calculation.
        
           | EwanToo wrote:
           | Our commit latency dropped when we turned it on, so
           | _something_ changed on a technical level
        
           | avisser wrote:
           | > I think it's just a difference in billing calculation
           | 
            | Agree. It feels like an actuary ran some numbers and found
            | that high I/O customers spend more on other AWS services and
            | are more profitable than low I/O tenants, so they changed
            | the formula to reduce churn among those high I/O tenants.
        
       | immibis wrote:
       | Customer buys cloud service with unpredictable pricing. Gets
       | bitten by high prices. Switches to (new!) cloud service with
       | predictable pricing. Prices are lower.
       | 
       | Also, it sounds like they could save money by not using cloud.
        
         | fosterfriends wrote:
         | For us, it would be net more expensive because we need to
         | factor in the labor cost of an engineer maintaining a non-cloud
         | DB.
        
         | Etheryte wrote:
          | Everyone who is using cloud could save money by not using
          | cloud. However, sometimes the money is worth it for the
          | features and guarantees you get.
        
           | immibis wrote:
           | But _especially_ everyone who is using cloud with
            | unpredictable pricing models. Unpredictable means it's
           | designed to screw you and make sure you don't have any
           | grounds to complain.
        
       | zgluck wrote:
       | Does anyone have any experiences to share about the performance
       | differences between regular RDS/MySQL and Aurora for MySQL (I/O
       | optimized config)?
       | 
       | (The article is about PostgreSQL.)
        
       | bearjaws wrote:
       | I highly recommend communicating with your business support team
       | at AWS.
       | 
        | Mine have always been helpful and have kept us up to date on
        | releases like this; sometimes we even get them before they are
        | GA.
        
       | nusmella wrote:
        | We have a 10TB database we switched from Aurora to Postgres,
        | and it cut our bill by 80%. However, there are some differences
        | in our schema, such as now using native partitions, so it's hard
        | to tell how much of the savings is due to the switch and how
        | much is due to our table and query design.
       | 
       | We have a similar story with DynamoDB too.
        
       | zuckerborg0101 wrote:
        | I wish I'd known about this!! Migrating my Aurora clusters rn
        
       | pbowyer wrote:
        | Where can you see how much of your Aurora spend is on I/O? There
        | must be a CloudWatch metric somewhere.
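        | 
        | Edit: at the cluster level, VolumeReadIOPs and VolumeWriteIOPs
        | appear to report the billed volume I/O counts. A rough boto3
        | sketch, with the cluster identifier as a placeholder:
        | 
        |     import boto3
        |     from datetime import datetime, timedelta, timezone
        |     
        |     cw = boto3.client("cloudwatch")
        |     
        |     # Billed read I/Os for the cluster volume over the last day;
        |     # VolumeWriteIOPs works the same way.
        |     now = datetime.now(timezone.utc)
        |     resp = cw.get_metric_statistics(
        |         Namespace="AWS/RDS",
        |         MetricName="VolumeReadIOPs",
        |         Dimensions=[{"Name": "DBClusterIdentifier",
        |                      "Value": "my-aurora-cluster"}],
        |         StartTime=now - timedelta(days=1),
        |         EndTime=now,
        |         Period=3600,
        |         Statistics=["Sum"],
        |     )
        |     print(sum(dp["Sum"] for dp in resp["Datapoints"]))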
        
         | epberry wrote:
         | If you hook up https://vantage.sh you can filter by RDS costs >
         | Cost by Category on a Cost Report to see this.
         | 
         | Disclaimer: I work for Vantage.
         | 
         | PS. I maintain https://ec2instances.info/rds and recently added
         | support for Aurora I/O Optimized pricing.
        
         | fosterfriends wrote:
          | I'm a big fan of Datadog's Cloud Cost Management:
          | https://www.datadoghq.com/product/cloud-cost-management/
          | 
          | Outside of that, you can go to your AWS billing, filter on
          | RDS, and slice by usage type. They make you dig, which is why
          | I prefer Datadog.
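          | 
          | If you'd rather script it, here's a minimal Cost Explorer
          | sketch of the same slice (assuming boto3 and billing API
          | access; the Aurora I/O line items show up as usage types
          | ending in something like "Aurora:StorageIOUsage", though the
          | exact strings vary by region):
          | 
          |     import boto3
          |     
          |     ce = boto3.client("ce")  # Cost Explorer
          |     
          |     # RDS spend for July 2023, grouped by usage type, so the
          |     # I/O line items stand out from instances and storage.
          |     resp = ce.get_cost_and_usage(
          |         TimePeriod={"Start": "2023-07-01", "End": "2023-08-01"},
          |         Granularity="MONTHLY",
          |         Metrics=["UnblendedCost"],
          |         Filter={"Dimensions": {
          |             "Key": "SERVICE",
          |             "Values": ["Amazon Relational Database Service"],
          |         }},
          |         GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
          |     )
          |     
          |     for group in resp["ResultsByTime"][0]["Groups"]:
          |         usage_type = group["Keys"][0]
          |         cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
          |         print(f"{usage_type}: ${cost:,.2f}")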
        
       | femiagbabiaka wrote:
       | Wow, good on AWS for releasing such a useful feature. It does
       | feel a little uncomfortable to have to wait on an interested
       | party to make optimizations to benefit you (to the tune of 90%
       | cost savings!!), but I guess that's the name of the game in
       | public cloud.
        
       | timbaboon wrote:
       | I work for a large insurance company. We had moved from on-prem
       | to Aurora, but our design strategy did not get updated in the
       | process, and our IO costs were about 80% of our total AWS bill.
       | We just switched to this new IO optimised pricing and we're
        | seeing huge discounts. I can sleep a bit easier now :) (we're
        | still going to change our DB design to reduce IO anyway)
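        | 
        | For a sense of why the discount is so large when I/O dominates,
        | a back-of-the-envelope sketch (prices are illustrative list
        | prices, roughly $0.20 per million I/O requests and
        | $0.10/GB-month storage on Standard, versus no I/O charge, ~30%
        | higher instance prices, and $0.225/GB-month storage on
        | I/O-Optimized; check the current pricing pages):
        | 
        |     # Rough monthly cost under each Aurora storage model.
        |     def standard(instance_usd, storage_gb, io_millions):
        |         return (instance_usd + storage_gb * 0.10
        |                 + io_millions * 0.20)
        |     
        |     def io_optimized(instance_usd, storage_gb):
        |         # no per-I/O charge, pricier instances and storage
        |         return instance_usd * 1.30 + storage_gb * 0.225
        |     
        |     # Example: $2,000/month of instances, 1 TB of storage,
        |     # 50 billion I/O requests per month.
        |     print(standard(2000, 1000, 50_000))   # ~ $12,100
        |     print(io_optimized(2000, 1000))       # ~ $2,825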
        
       | akokanka wrote:
        | It's so amazing when a CTO cares about technology and not only
        | about shareholder returns and gains.
        
         | immibis wrote:
         | You're thinking of the CEO.
        
       | thecarissa wrote:
       | Greg earned a new espresso machine for his efforts here!
        
         | fosterfriends wrote:
         | Slowly rebuilding Airbnb's Okay Coffee club...
        
       | jjice wrote:
        | I didn't know about Aurora I/O-Optimized until just now. That
        | solves my biggest fear with Aurora, which is an unoptimized
        | query wreaking havoc on our IO and racking up a bill. Very cool
        | offering to see.
        
       ___________________________________________________________________
       (page generated 2023-08-10 23:00 UTC)