[HN Gopher] Cloudflare R2 storage: Rapid and reliable object sto...
       ___________________________________________________________________
        
       Cloudflare R2 storage: Rapid and reliable object storage, minus the
       egress fees
        
       Author : geostyx
       Score  : 480 points
       Date   : 2021-09-28 13:01 UTC (10 hours ago)
        
 (HTM) web link (blog.cloudflare.com)
 (TXT) w3m dump (blog.cloudflare.com)
        
       | lokl wrote:
       | Can you use R2 to host a static website as easily as you can with
       | S3?
        
         | BlackMiracle wrote:
         | To host static websites, you use Cloudflare Pages.
        
           | lokl wrote:
           | Thanks, I am familiar with Pages, but I would like to know
           | whether R2 can also be used for this purpose, in a similar
           | fashion as S3.
        
             | RulerOf wrote:
             | Indeed, the Browser -> CDN -> Object storage static hosting
             | model all under one roof is extremely appealing.
        
           | segphault wrote:
           | Cloudflare Pages locks you into git-based deployment, which
           | isn't always practical, especially for sites that are heavy
           | with images and other non-text static assets. I don't want to
           | use git to manage the images for my website and I don't want
        | use git to manage the images for my website and I don't want
        | to have to pay GitHub every month for Git LFS storage and
           | bandwidth costs.
        
             | DenseComet wrote:
             | You can take a look at Workers Sites instead. Similar to
             | their Pages solution, but with a lot more flexibility.
        
         | jl6 wrote:
         | This (combined with the free egress) would be an utterly killer
         | app for sites that just want to publish a few simple static
         | files.
        
         | greg-m wrote:
          | Sure, you could just set up a Worker in front of a bucket. This
         | is very similar to how the original Workers Sites worked.
        
       | sudhirj wrote:
       | I know the Cloudflare team hangs out here, so thanks, and great
       | job! This was absolutely necessary for my line of work. Couple of
       | quick questions/confirmations:
       | 
        | * R2 will support the same object sizes as S3? We have 500GB+
        | objects and could go to 1TB per object.
        | 
        | * R2 will support HTTP Range GETs, right?
       | 
       | Egress bandwidth for objects on S3 is the biggest line item on
       | the AWS bill for a company I work for, by an order of magnitude,
       | and this will just wipe it off for the most part.
        
         | greg-m wrote:
         | Hey, I'm the PM for R2.
         | 
         | Yes to range requests. Current object limit is smaller than
         | that, but we don't have a fundamental restriction there. Shoot
         | me an email to gmckeon [at] cloudflare.
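For anyone unfamiliar with the Range GETs confirmed above: the client sends a `Range: bytes=start-end` header and the server answers 206 Partial Content with just that slice. Here is a minimal sketch of the byte math, with the object held in memory rather than fetched from a real bucket:

```python
import re

def serve_range(data: bytes, range_header: str):
    """Resolve an HTTP 'Range: bytes=start-end' header against an object."""
    m = re.fullmatch(r"bytes=(\d*)-(\d*)", range_header)
    if not m or (not m.group(1) and not m.group(2)):
        raise ValueError("malformed Range header")
    start_s, end_s = m.groups()
    if start_s:
        start = int(start_s)
        end = int(end_s) if end_s else len(data) - 1  # open-ended range
    else:  # suffix range: the last N bytes of the object
        start = max(len(data) - int(end_s), 0)
        end = len(data) - 1
    body = data[start:end + 1]
    content_range = f"bytes {start}-{start + len(body) - 1}/{len(data)}"
    return body, content_range  # what a 206 Partial Content would carry

obj = bytes(range(256))
body, cr = serve_range(obj, "bytes=0-99")
assert len(body) == 100 and cr == "bytes 0-99/256"
body, cr = serve_range(obj, "bytes=-16")  # last 16 bytes
assert body == obj[-16:]
```

This is why range support matters for huge objects: a client can pull a 1TB object in resumable slices, or read just the footer of a columnar file, without ever downloading the whole thing.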
        
           | busymom0 wrote:
           | Really excited for R2 Storage. I am wondering if R2 can solve
           | a limitation I was running into with BackBlaze.
           | 
           | I had tried using BackBlaze 8 months ago as a much cheaper
           | (especially with B2 and CF's Free Data Transfer partnership)
           | replacement for Amazon S3 and was running into a limitation
           | on B2.
           | 
           | I had a scenario where my users can upload images from the
           | browser to BackBlaze. I wanted the ability to control the
           | file name of the uploaded file. I don't want the user to be
           | able to modify the network request to upload the file with a
           | different file name. Nor do I want the users to be able to
           | upload files with names which would overwrite existing files.
           | 
           | B2 didn't let me upload files with a specific filename in the
           | pre-signed URL.
           | 
           | For example there's this API:
           | 
           | curl
           | https://apiXXX.backblazeb2.com/b2api/v2/b2_get_upload_url -H
           | 'Authorization: XXXXXXXXXXXXXXXXXXXXXXXXX=' -d '{"bucketId":
           | "XXXXXXXXXXXXXXX"}'
           | 
           | which gave me an upload URL where I can upload the file to.
           | 
           | And then there's this:
           | 
           | https://www.backblaze.com/b2/docs/b2_upload_file.html
           | 
           | which lets users upload files to that URL.
           | 
            | But this allowed my users to upload a file with any name
            | they want. It would also allow them to overwrite existing
            | files (from other users).
           | 
           | My question is more from a security point of view so
           | preventing one user from overwriting another user's content
            | is crucial. For example, let's say you right-click on an image
           | from someone else on facebook and get the actual image's file
           | name. Now you try to upload an image on facebook and you edit
           | the network request in the browser's inspector tool to the
           | image file name which you got for another user. Facebook
           | obviously prevents this in their own way using pre-signed
           | urls which include the filename in the signature. However on
           | BackBlaze if I try this, the "pod" url which is received
           | doesn't include any file name signature. The pod URL is just
           | where the image gets stored on your end. A user can easily
           | edit the network request and modify the "X-Bz-File-Name"
           | Header to another user's filename. This would be a major
           | security vulnerability if I went with BackBlaze. As a
           | workaround, right now it seems like users would first have to
           | upload files to my own server, then my server would have to
           | upload them to BackBlaze to avoid this issue. This sounded
            | like a hassle.
           | 
           | Amazon S3 solves this problem using createPresignedPost which
           | includes a signature of the filename in the URL. I contacted
            | BackBlaze's support and got a response that their S3 API
            | doesn't support createPresignedPost:
           | 
           | https://www.reddit.com/r/backblaze/comments/kzszym/is_backbl.
           | ..
           | 
           | You can read B2's staff's response to my question:
           | 
           | https://www.reddit.com/r/backblaze/comments/l0c9s7/is_there_.
           | ..
           | 
           | Is there a way to prevent this on R2? Something where the
           | link provided by b2_get_upload_url (whatever R2's equivalent
            | will be) only works for a specific file name?
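The fix described above (S3's createPresignedPost includes the object key in the signed policy, so the client can't rename the upload) can be sketched with a hand-rolled HMAC token. This is a minimal illustration of the idea, not B2's or R2's actual API; the helper names are hypothetical:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # never shipped to the browser

def make_upload_token(filename: str, expires_in: int = 3600) -> str:
    """Sign the exact filename (plus an expiry) server-side."""
    expiry = str(int(time.time()) + expires_in)
    msg = f"{filename}|{expiry}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{expiry}.{sig}"

def verify_upload(filename: str, token: str) -> bool:
    """Reject any upload whose filename doesn't match the signature."""
    try:
        expiry_s, sig = token.split(".")
        expiry = int(expiry_s)
    except ValueError:
        return False
    if expiry < time.time():
        return False  # token expired
    msg = f"{filename}|{expiry_s}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# The browser gets a token bound to "user42/avatar.png"; editing the
# request to target another user's key invalidates the signature.
token = make_upload_token("user42/avatar.png")
assert verify_upload("user42/avatar.png", token)
assert not verify_upload("user1/avatar.png", token)
```

Because the filename is part of the signed message, the overwrite attack described above fails: changing the `X-Bz-File-Name`-style header without a matching signature is rejected server-side.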
        
           | mnw21cam wrote:
           | I'd say definitely increase that limit. I have a 4TB file
           | hanging around over here, I'm sure others have bigger ones.
        
             | greg-m wrote:
              | We will :) We're working to reach 5 GB per upload, with
             | multi-part support.
        
               | danielrhodes wrote:
                | I personally think uploading massive files in one go is
                | not usually desirable. Better would be an easy way to
                | chunk a file, upload the pieces, and have the server put
                | it back together, which would improve reliability.
        
               | zackbloom wrote:
               | Native tus support would be pretty amazing.
        
               | anderspitman wrote:
               | Tus is over-complicated IMO. Why require the server to
               | store state that's already available on the file system?
               | Just support PATCH requests with an offset parameter. If
               | an upload fails partway through, HEAD the file to see how
               | much got copied to disk and resume from there.
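The resume scheme described above can be sketched with an in-memory stand-in for the server; the class and method names are hypothetical, and a real client would issue actual HEAD and PATCH requests over HTTP:

```python
class UploadServer:
    """In-memory stand-in for a server persisting a partial upload."""

    def __init__(self):
        self.data = bytearray()

    def head(self) -> int:
        """HEAD: report how many bytes have been persisted so far."""
        return len(self.data)

    def patch(self, offset: int, chunk: bytes) -> None:
        """PATCH with an offset parameter; reject writes that leave gaps."""
        if offset != len(self.data):
            raise ValueError("offset does not match persisted length")
        self.data.extend(chunk)

def resume_upload(server: UploadServer, payload: bytes,
                  chunk_size: int = 4) -> None:
    """Resume from wherever the server says the previous attempt stopped."""
    offset = server.head()
    while offset < len(payload):
        chunk = payload[offset:offset + chunk_size]
        server.patch(offset, chunk)
        offset += len(chunk)

server = UploadServer()
server.patch(0, b"hello ")             # first attempt dies partway through
resume_upload(server, b"hello world")  # HEAD reports 6 bytes; resume there
assert bytes(server.data) == b"hello world"
```

The appeal of the argument above is that the server needs no session state beyond the file itself: the file's current length *is* the resume offset.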
        
       | barefeg wrote:
        | What about the AWS bandwidth costs of accessing Cloudflare's
        | data over the internet?
        
       | kondro wrote:
       | I'm curious what read and write request prices will be.
       | 
        | Like egress pricing, S3's rates starting at $5 per million
        | writes and $0.40 per million reads feel excessive.
        
       | StratusBen wrote:
       | Really curious to see how this goes. If they live up to the
       | following paragraph, that's pretty game-changing:
       | 
       | >> This cheaper price doesn't come with reduced scalability.
       | Behind the scenes, R2 automatically and intelligently manages the
       | tiering of data to drive both performance at peak load and low-
       | cost for infrequently requested objects. We've gotten rid of
       | complex, manual tiering policies in favor of what developers have
       | always wanted out of object storage: limitless scale at the
       | lowest possible cost.
       | 
        | The amount of effort it takes to understand and account for S3
        | Intelligent-Tiering is somewhat mind-blowing, so getting rid of
        | all of that (and the corresponding fees) would be really nice
        | and TheWayThingsShouldBe(tm) for the customer. On top of that,
        | most users don't even know S3 Intelligent-Tiering exists, so
        | it'll be great if Cloudflare just handles it automatically.
       | 
       | We at https://vantage.sh/ (disclosure, I'm the Co-Founder and
       | CEO) recently launched a cross-provider cost recommendation for
       | CloudFront Egress to Cloudflare which was really popular and I
       | can imagine doing something similar for S3 -> R2 once it is live
       | and we are able to vet it.
        
         | blackandsqueaky wrote:
         | Does Vantage offer comparisons for Backblaze B2, OVH etc?
         | 
         | When looking at object storage, tail latency is probably the
         | single most overlooked metric, and the most material
         | differentiator between providers after bandwidth costs. Don't
         | sweat the cent spent on storing an object, worry about the cost
          | of the 6,000,000 copies of it you'll ship after it's stored.
         | 
          | As for bandwidth, Cloudflare becomes uninteresting the moment
          | your account starts to see any real consumption; even AWS is
          | easier to negotiate with.
        
           | StratusBen wrote:
            | We are working on a more holistic approach to ingesting and
           | providing recommendations from companies like Backblaze, OVH,
           | etc. in addition to many, many more providers for other AWS
           | services. The goal being that we can give customers the tools
           | they need to get visibility on what options exist and take
           | action themselves from there.
        
       | otar wrote:
        | Yes! This is what I was predicting about a week ago on HN:
       | https://news.ycombinator.com/item?id=28564387
        
       | fabian2k wrote:
       | Am I reading this right that this is pretty much aiming to be a
       | full competitor to S3? Or is this a more limited special purpose
       | tool? I didn't really follow what Cloudflare is doing, so I only
       | know them as a CDN. Are private buckets possible with this?
       | 
       | We all knew that the big players are really maximising their
       | profits on the egress charges, so I can see that this left some
       | potential for someone to step in. No egress charges at all still
       | sound a bit too good to be true, but it would be nice as that's
       | just one parameter less to think about.
       | 
        | Another interesting aspect is Cloudflare Workers. As far as I
       | can tell they're not a full replacement for something like AWS
       | Lambda if e.g. I need to do a bit heavier stuff on the data in
       | R2. Being able to do heavier processing close to the actual data
       | would be really interesting as well.
        
       | lewisl9029 wrote:
       | Wow, with global replication by default, this looks absolutely
       | perfect for what I'm currently building.
       | 
       | I'm hoping this means what I think it means, that write latencies
       | will be minimal across the globe, since writes will be persisted
       | and ack'd at the closest region and then eventually consistently
       | propagated to other regions?
       | 
       | If so, curious what would happen in a scenario where a region
       | requests an object that has been persisted at another region but
       | not yet propagated? Will it result in a 404 or is the system
       | smart enough to route the request to the region that has the file
       | at the cost higher latency?
       | 
       | From my research so far into S3's cross region replication, the
       | latter behavior doesn't seem possible out of the box since
       | requests have to specify a single region (S3 experts, please do
       | correct me if I'm wrong), so I'm hoping CloudFlare with its deep
       | expertise in managing a global network can differentiate here.
       | 
       | Also, would love to hear some numbers on what kinds of write
       | latency to expect. From my experience so far, S3 writes for tiny
       | files in a single region take on the order of 50ms ish even for
       | clients in close physical proximity, which is serviceable for my
       | use case, but seems higher than it needs to be (and every little
       | bit I can shave off on latency helps tremendously for what I'm
       | building). Really looking forward to seeing what the CloudFlare
       | solution is capable of here.
       | 
       | Lastly, S3 didn't advertise and guarantee strong read-after-write
       | consistency for same region read/write until late last year. Will
       | R2 offer this out of the gate?
        
       | aclelland wrote:
       | Really interesting, will R2 support lifecycle rules like S3 does?
       | We write around 90 million files per month to S3, if we could
       | replace that with R2 and have the files automatically expire
       | after 30 days that'd be a pretty amazing price reduction for us.
        
         | acdha wrote:
         | They appear to be focused on an automatic version:
         | 
         | > Behind the scenes, R2 automatically and intelligently manages
         | the tiering of data to drive both performance at peak load and
         | low-cost for infrequently requested objects. We've gotten rid
         | of complex, manual tiering policies in favor of what developers
         | have always wanted out of object storage: limitless scale at
         | the lowest possible cost.
        
           | Heliosmaster wrote:
            | GP isn't asking about lifecycle rules that change the
            | storage tier, but about rules that delete files after some
            | time.
        
         | greg-m wrote:
         | Hey, I'm the PM for R2.
         | 
         | We support per-object TTLs, so this should work!
        
           | aclelland wrote:
           | Yeah that'd work for us! Just signed up for access :)
        
           | ec109685 wrote:
            | The full lifecycle support S3 has is really powerful. One
            | use case we have for UGC content is disallowing online
            | serving systems the ability to permanently delete an
            | object. Instead
           | the object is marked as deleted and then automatically
           | cleaned up after 30 days.
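The soft-delete pattern described above (serving systems may only tombstone; a lifecycle rule purges later) can be sketched like this; the store and the sweep are simulated in memory, not any provider's actual API:

```python
DELETED_AFTER = 30 * 24 * 3600  # lifecycle rule: purge 30 days after delete

store = {}       # key -> object bytes
tombstones = {}  # key -> unix time the object was marked deleted

def soft_delete(key: str, now: float) -> None:
    """Online systems may only mark an object; they can never purge it."""
    if key in store:
        tombstones[key] = now

def lifecycle_sweep(now: float) -> None:
    """Background rule: permanently remove objects tombstoned long ago."""
    for key, marked in list(tombstones.items()):
        if now - marked >= DELETED_AFTER:
            store.pop(key, None)
            tombstones.pop(key, None)

store["post/123.jpg"] = b"..."
soft_delete("post/123.jpg", now=0)
lifecycle_sweep(now=10)                 # too soon: object survives
assert "post/123.jpg" in store
lifecycle_sweep(now=DELETED_AFTER + 1)  # 30 days later: gone for good
assert "post/123.jpg" not in store
```

The point of the split is blast-radius control: a compromised or buggy serving system can only set tombstones, which leaves a 30-day window to recover the data.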
        
       | RobLach wrote:
       | Impressive carve-out in the object storage market.
        
       | ryanworl wrote:
       | Could you provide some more details on the storage system here?
       | 
       | Is it built on Ceph's S3 compatibility?
       | 
       | Your durability numbers imply erasure coding. Is that the case?
        
         | dikei wrote:
         | > Your durability numbers imply erasure coding. Is that the
         | case?
         | 
         | Well, you can't really build inexpensive, large scale, reliable
         | storage without erasure coding. So that's probably a given.
        
       | whalesalad wrote:
       | I'm trying to figure out the long-con of Cloudflare. Too many
       | free/fantastic products.
        
         | elnygren wrote:
         | It certainly ain't free or cheap if you are running a real
         | business with real traffic and real customers.
         | 
         | However, still totally worth it, easily. Just saying they have
         | a healthy B2B business model.
        
       | marcus_cemes wrote:
       | As someone who took up making travel videos as a hobby, this is
       | definitely on my radar.
       | 
        | Video files are large. Although ~20 cents per streamed video is
        | manageable for a small website (S3, Cloud Storage, Azure...),
        | it's the potential for abuse driving my bill up that terrifies
        | me, which is why I decided to stick with Hetzner VMs and their
        | 20TB of free egress.
        
         | corobo wrote:
         | Have you seen their Stream product, any use? Came out of their
         | last batch of features if I recall
         | 
         | https://www.cloudflare.com/en-gb/products/cloudflare-stream/
        
           | marcus_cemes wrote:
           | I have, I've also taken a look at Mux. Both would be
           | fantastic options, and I'm still considering them, but I
           | don't have many films and I'm biased towards distributing the
           | original high-quality encodes.
           | 
           | Both of these services significantly reduce the file-size
            | with a re-encode, even if they promote an "imperceptible
           | quality loss". They seem to be more suited to high traffic
           | on-demand streaming for websites, promotional material, etc.
        
             | dkh wrote:
             | What format, bitrate, and resolution are your video
             | outputs? Or just a length and file size of one that's
             | handy. (I'm a streaming video engineer and curious.)
             | Reduction in file size from re-encoding doesn't mean there
             | will be perceivable loss in quality. Your videos should
             | almost certainly not require the kind of bandwidth that you
             | mention unless your deliverables are higher-quality than
             | those from, say, Netflix :)
             | 
              | FYI, Mux isn't just an okayish platform for mid-level
              | use-cases; it's the gold standard for high-performance
              | live streaming and VOD, created by the people behind a
              | bunch of tech central to the streaming industry, and used
              | at the highest level by folks like CBS, Mandolin, etc.
             | Of course you don't have to use it, but it's certainly no
             | toy.
             | 
             | Just curious, does your audience expect to download it in
             | full to watch later or on a USB stick as you describe, or
             | is that a side effect due to the large file sizes?
             | 
             | EDIT: I am not a Mux employee, nor is any of this very
             | important, I've just been working in this space for 10
             | years and very seldom run across something that needs these
             | requirements and I'm curious :)
        
               | marcus_cemes wrote:
               | No worries, I've always been more interested by the
               | technical aspect more than the creative aspect of
               | filmmaking, hence my interest in programming. It's a
               | personal project, I've been making films since the age of
               | 11, although I haven't been able to do much since I
                | started my Bachelor's degree...
               | 
               | I encode H.264/AAC for compatibility, usually 2-pass at
               | 8Mbit/s for FHD content or around 30Mbit/s for UHD,
               | 256-320 kbps for AAC. This gives a size of 1-4 GB per
                | video. My Dad worked in broadcast and gave me those
                | figures 10 years ago; I generally stick by them for any
                | work I'm proud of!
               | 
               | You are right, that bitrate absolutely isn't necessary :D
               | , and there are better codecs too. I don't have more than
               | 17 films at the moment, the whole backend costs me about
               | 4 euros a month on Hetzner with an attached 32 GB of
               | block storage, no bandwidth costs (300Mbit/s max),
               | single-core running Node.js and Nginx.
               | 
               | I make films during travel and events, and share them
                | with friends and family who were there and for whom it has
               | some intrinsic value. They're mostly personal, not just
               | for consumption. Short compilations that I would like to
               | keep for the future, like old photos. Hence why people
               | watch (and hopefully rewatch!) them on a TV and don't
               | mind the wait.
               | 
               | Buffer-less streaming is absolutely not a priority
               | (although nowadays I think it's more expected, people
               | have shorter attention spans, despite the hours of work
               | that goes into a short travel film). It's a very niche
               | case, but would have cost me at least $50 in bandwidth
               | alone with the big three. It's not going to break the
               | bank, but it's also unnecessary.
               | 
               | You don't usually notice the drop in quality on YouTube
               | or Netflix, until you actually try working with high
               | quality source footage (high-bitrate from a dedicated
               | camera, not phone footage). Then it's very noticeable
               | (purple/green flickering on even surfaces from chroma
               | subsampling, I'm guessing), and makes you sad when you
               | realise what everyone is missing!
               | 
               | If you're still curious, my website is still live. I beg
               | you not to judge, I started making short films very young
               | and never got very good at them either (I'm studying
               | nanotech & engineering, no film career for me)!
               | 
               | https://mastermovies.uk/glacier
        
               | drip-egg-jack wrote:
                | I suggest using constant-quality encoding instead of a
                | constant bitrate; that way the encoder automatically
                | adapts the bitrate to each scene. It's a much better
                | approach and will give you better quality and a smaller
                | file at the same time.
                | 
                | For example, the encoder might choose a 1Mbps bitrate
                | for a scene that is mostly static and 20Mbps for a part
                | of the video with a lot of movement. Your 8Mbps constant
                | bitrate will be overkill for the first scene and too
                | low for the second. Let the encoder decide the optimal
                | bitrate.
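With ffmpeg's x264 encoder, constant quality is the `-crf` flag used in place of a fixed `-b:v` bitrate. A small sketch that builds such a command line; the filenames and audio settings are placeholders, not a recommendation:

```python
def ffmpeg_crf_cmd(src: str, dst: str, crf: int = 18,
                   preset: str = "slow") -> list[str]:
    """Build an ffmpeg command for constant-quality (CRF) H.264 encoding.

    CRF replaces a fixed '-b:v 8M'; lower CRF means higher quality, and
    ~18 is commonly cited as near visually lossless for x264.
    """
    if not 0 <= crf <= 51:
        raise ValueError("x264 CRF must be in the range 0-51")
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-crf", str(crf), "-preset", preset,
        "-c:a", "aac", "-b:a", "256k",
        dst,
    ]

cmd = ffmpeg_crf_cmd("input.mov", "output.mp4")
assert "-crf" in cmd and "-b:v" not in cmd
```

Note there is no `-b:v` in the result: the encoder spends bits where the scene needs them, which is exactly the static-scene vs. high-motion trade-off described above.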
        
             | thangngoc89 wrote:
             | Do you expect your viewer to download and not view it
             | directly on a website?
        
               | marcus_cemes wrote:
               | Yes, most people I know who have an interest have slow
                | connections, or watch it on a TV from a USB stick. It's a
               | unique requirement.
               | 
               | That doesn't mean they can't stream, there is an option
               | to watch it in the browser, they just need a reliable
               | connection (~30 Mbit/s).
        
         | [deleted]
        
       | 2Gkashmiri wrote:
        | Uh, I have to ask: how about using R2 to host PeerTube
        | content? It would be somewhat expensive, but you wouldn't have
        | to worry about egress?
        
       | irq-1 wrote:
       | Request: Use workers KV to coordinate object locks, like a file
       | system. (Well, something better than a file system.)
        
       | bentaber wrote:
       | So the naming strategy is to call it R2, because it comes before
       | S3? I like it
        
         | vira28 wrote:
         | Matthew tweeted "Why R2? Because it's S3 minus the one most
         | annoying thing: egregious egress."
         | 
         | https://twitter.com/eastdakota/status/1442879872154566658
        
       | rpadovani wrote:
        | Nice, and it seems to be quite cheap as well. It's unfortunate,
        | though, that they don't talk about data residency: where are
        | the servers located? Where will my data be copied?
        
         | nanankcornering wrote:
         | It's still in development, so no details about that yet. Maybe
         | their top tier DCs?
         | 
         | "Our vision for R2 includes multi-region storage that
         | automatically replicates objects to the locations they're
         | frequently requested from."
        
         | jgrahamc wrote:
         | What would you like the answer to be? Do you need regional
         | control over data storage like we do with this
         | https://blog.cloudflare.com/supporting-jurisdictional-
         | restri...?
        
           | rpadovani wrote:
           | Yes, definitely, EU regulated industries need to be 100% sure
           | to keep data in EU datacenters
        
             | eastdakota wrote:
             | We plan to add support for data residency requirements on
             | an object-by-object basis. In other words, if you need one
             | object to stay in the EU and another to stay in the US and
             | a third to stay in India you tag each object using the same
             | API and we handle ensuring the residency requirements
             | behind the scenes.
        
           | yawnxyz wrote:
            | We just got a big grant that involves healthcare and
            | genomic data for a new type of therapy in Australia, so we
            | have jurisdictional requirements (data needs to stay within
            | Australia, and some data needs to stay within New South
            | Wales). We're currently talking with your run-of-the-mill
            | providers, but I'd be pretty excited to try this out with
            | Cloudflare instead... esp. since previous similar projects
            | have been hit with some nasty egress costs.
        
       | zoso wrote:
        | Seems like a cheap option to back up my photos (I just need to
        | think about reliability/uploading copies to multiple regions).
        
       | orf wrote:
       | I see the PM for this product is here, a few things we find
       | useful with S3:
       | 
       | - Daily inventory report of the contents of the bucket in parquet
       | format
       | 
       | - Signed URLs
       | 
       | - Lifecycle policies based on tags (but to be honest just a
       | policy that isn't restricted to suffix/prefix would be amazing)
       | 
       | - Bucket policies (restrict object key structure)
       | 
       | Lots of these work well enough in AWS but are often lacking in
       | some regards with annoying restrictions.
       | 
       | Looks like an amazing product, good luck!
        
       | pier25 wrote:
       | This is really a game changer for storage. Well done Cloudflare.
       | 
       | How will other providers respond to this now?
       | 
        | AWS, GCP, and the others pay nowhere near what they charge for
        | egress themselves. The super high egress charges are pretty
        | ridiculous.
        
         | js4ever wrote:
         | Yes and I hope this will force AWS & co to end this egress
         | madness
        
       | cloudfalcon wrote:
       | Having done hundreds of TCO analyses for customers moving object
       | storage providers, this seems like it carves out a very
       | interesting niche for Cloudflare. R2's higher storage costs
       | (roughly triple) also make it a more manageable threat to
       | specialized solutions like Storj DCS, Wasabi and Backblaze B2.
       | 
       | At Taloflow (https://www.taloflow.ai), (disclosure: I'm the
       | CEO/Cofounder) we provide buying insights for cloud object
       | storage (and soon other IaaS/PaaS). We will definitely be adding
       | Cloudflare R2 to the mix.
        
       | alberth wrote:
        | This is fantastic. Now all I need is a Cloudflare RDBMS to run
        | my entire business on Cloudflare.
       | 
       | (Workers KV is great but there's a ton of times that you just
       | need an actual relational database)
        
         | xrendan wrote:
          | Depending on your use-case (if it's read-heavy), you could
          | host a SQLite file on R2 and read it from Workers.
        
         | greg-m wrote:
         | Durable Objects is as close as we come today, but we still have
         | two innovation weeks left this year.
        
       | Shadonototra wrote:
       | > $0.015 per GB of data stored per month
       | 
       | we are still living in 2007?
        
         | frakkingcylons wrote:
         | You're misreading that as $0.15 per GB.
        
       | ttobbaybbob wrote:
       | Nice to see someone flipping the script and encroaching on AWS'
        | territory rather than vice versa.
       | 
       | Taking the have-a-much-better-product route to siphoning use from
       | AWS is particularly ambitious. I hope it works out. AWS has had
       | it a little too easy for too long
        
         | drip-egg-jack wrote:
          | Actually, many companies have tried to compete with AWS,
          | especially on egress pricing, but without that kind of
          | exposure it is very hard to get traction and grow big enough.
          | 
          | And there are a lot of things that can be improved: egress
          | pricing, better support, getting rid of regions completely,
          | etc.
        
       | ignoramous wrote:
       | > _As transformative as cloud storage has been, a downside
       | emerged: actually getting your data back... When they go to
        | retrieve that data, they're hit with massive egress fees that
       | don't correspond to any customer value -- just a tax developers
       | have grown accustomed to paying._
       | 
       | Strategy Letter V, commoditize your competitor's advantages!
       | 
       | > _We've gotten rid of complex, manual tiering policies in favor
       | of what developers have always wanted out of object storage:
       | limitless scale at the lowest possible cost._
       | 
       | Cloudflare has a clear strategy: Be the simplest cloud platform
       | to deploy to. It has been a breeze as a small dev shop adopting
       | their tech. AWS started with the startups, but have since long
       | struggled to keep up that simplicity in face of supporting what
       | must be a dizzying array of customer requirements. Remains to be
       | seen how Cloudflare fares in that regard. I like my Golang better
       | than Rust.
       | 
       | > _Cloudflare R2 will include automatic migration from other
       | S3-compatible cloud storage services. Migrations are designed to
       | be dead simple._
       | 
       | Taking a leaf out of Amazon Data Migration Service and its free
       | transfers from elsewhere into RedShift/RDS/Aurora/OpenSearch.
       | Niice.
       | 
       | > _...we designed R2 for data durability and resilience at its
       | core. R2 will provide 99.999999999% (eleven 9's) of annual
       | durability, which describes the likelihood of data loss... R2 is
       | designed with redundancy across a large number of regions for
       | reliability._
       | 
       | S3 goes up to 16 9s with cross-region replication... and so
       | wondering why R2's still at 11 9s? Maybe the multi-region
       | tiering is just acceleration (a la S3 Accelerated Buckets) and
       | not replication?
       | 
       | > _...bind a Worker to a specific bucket, dynamically
       | transforming objects as they are written to or read from storage
       | buckets._
       | 
       | This is huge, if we could open objects in append-mode. Something
       | that's expensive to do in S3 (download -> append -> upload) even
       | after all these years.
       | 
       | > _For example, streaming data from a large number of IoT devices
       | becomes a breeze with R2. Starting with a Worker to transform and
       | manipulate the data, R2 can ingest large volumes of sensor data
       | and store it at low cost._
       | 
       | Okay, where do I sign up?
       | 
       | > _R2 is currently under development..._
       | 
       | Oh.
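As a rough sanity check on the eleven-9s durability figure discussed above, it works out to about one object lost per year for every hundred billion stored. This is a simplistic independent-loss model, not Cloudflare's actual math:

```python
# Back-of-the-envelope: expected annual object loss at eleven 9s,
# treating each object's loss as independent. Illustrative only.
durability = 0.99999999999            # eleven 9s
p_loss = 1 - durability               # annual loss probability per object
objects_stored = 100_000_000_000      # 100 billion objects

expected_lost_per_year = objects_stored * p_loss
print(round(expected_lost_per_year, 2))  # ~1.0 object per year
```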
        
         | fideloper wrote:
         | Being able to append to an object would be amazing!
        
       | sneak wrote:
       | What's the censorship policy?
       | 
       | Is this going to be content-neutral, like Cloudflare was when
       | fronting ISIS websites?
       | 
       | Or is this going to be fine-until-bad-PR, like when Cloudflare
       | decided to stop hosting The Daily Stormer?
       | 
       | There is a special kind of lock-in when it comes to object
       | storage, as generally you use something like this when the data
       | is too big to store another copy of locally or at another
       | provider. It's not like you can easily maintain provider
       | independence, and if Cloudflare decides one day that some of your
       | UGC in a bucket isn't something they want to host, what happens
       | then?
       | 
       | Is the data lost forever because your account is nuked? Is there
       | a warning or grace period?
       | 
       | I am hesitant to put any large amount of data into a service
       | without a crystal clear statement on this, so that I can know up
       | front whether or not a business needs to maintain a second,
       | duplicate object store somewhere else for business continuity.
       | 
       | If Cloudflare in practice is going to nuke the account the moment
       | your site ends up hosting something objectionable, this DR
       | requirement (a second provider that also stores all objects)
       | needs to be factored into a customer's costs. (It may be that the
       | bandwidth savings still make it worth it to use Cloudflare even
       | with double storage.)
        
         | chromatin wrote:
         | Well, it is in feature preview so this is probably not defined.
         | 
         | You raise an interesting point regarding object storage being a
         | relatively unique type of lock-in.
        
         | celsoazevedo wrote:
         | > I am hesitant to put any large amount of data into a service
         | without a crystal clear statement on this, so that I can know
         | up front whether or not a business needs to maintain a second,
         | duplicate object store somewhere else for business continuity.
         | 
         | It's a mistake to rely on a clear statement when you can't
         | afford to lose your data. Stuff happens all the time...
         | mistakes, malware, an expired credit card, etc. Independently
         | of the provider you decide to use, I'm not sure if a backup is
         | optional in your case.
        
       | js4ever wrote:
       | Amazing, this is exactly what I was looking for, an S3 compatible
       | provider with no egress cost ... And reliable. I can't wait to
       | try Cloudflare R2!
        
       | dgllghr wrote:
       | Does it have read after write consistency? And does it have it
       | across regions?
        
         | greg-m wrote:
         | Yes, we have read after write consistency across regions today.
         | We're considering relaxing that if we need to reduce latency
         | (one of the things to be worked out in beta!).
        
       | xfalcox wrote:
       | I have been doing work on ensuring my app is compatible with the
       | many Object Storage services. Would be great to get access to
       | this early and make it compatible too.
        
       | sdfhbdf wrote:
       | Very exciting. Object storage is getting really competitive and I
       | love the naming scheme alliance - S3, B2 (Backblaze) and now R2.
       | Who will do one with a "1"?
       | 
       | On a serious note, I'm wondering about the signed URL and ACL
       | capabilities of the Cloudflare offering, because this is something
       | we use.
       | 
       | I'm also wondering: does R2 replace S3 and CloudFront at the same
       | time? That'd be nice and one less headache.
        
         | drip-egg-jack wrote:
         | > Our vision for R2 includes multi-region storage that
         | automatically replicates objects to the locations they're
         | frequently requested from.
         | 
         | Seems like they are going to automatically replicate data to
         | other regions - something tebi.io has been doing for a long
         | time already. It is a geo-distributed S3-compatible storage
         | that replicates data across the world to reduce read/write
         | latency and increase speed.
         | 
         | If done right, this might increase download speeds by a lot,
         | especially for big, infrequently accessed files.
        
         | slim wrote:
         | Q1
        
         | sfeng wrote:
         | The Cloudflare offering supports their serverless Workers
         | compute, which would make signed URLs and ACLs trivial.
         | Cloudflare would certainly also be replacing CloudFront.
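For what it's worth, a Worker in front of a bucket could enforce expiring signed URLs with nothing more than an HMAC over the path and expiry. A minimal sketch of such a scheme in Python (the function names and query format are made up for illustration; this is not any Cloudflare API):

```python
import hashlib
import hmac
import time

SECRET = b"shared-signing-key"  # hypothetical key shared with the signer

def sign_url(path: str, expires: int) -> str:
    """Append an expiry timestamp and HMAC signature to a URL path."""
    msg = f"{path}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify(path: str, expires: int, sig: str, now: int) -> bool:
    """Reject expired or tampered URLs; constant-time signature compare."""
    if now > expires:
        return False
    expected = hmac.new(SECRET, f"{path}:{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

# A valid, unexpired URL verifies; a tampered path does not.
expires = int(time.time()) + 3600
url = sign_url("/bucket/video.mp4", expires)
sig = url.split("sig=")[1]
assert verify("/bucket/video.mp4", expires, sig, int(time.time()))
assert not verify("/bucket/other.mp4", expires, sig, int(time.time()))
```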
        
       | rad_gruchalski wrote:
       | This is fantastic. This + workers has so much potential to shake
       | up the market of serverless compute!
        
       | zacincceo wrote:
       | $15/TB is too expensive, especially if you want to follow a
       | proper 3-2-1 strategy - then it'll cost you $45/TB.
       | 
       | What happens to your Object Storage buckets when Cloudflare has
       | an outage? - https://filebase.com/blog/what-happens-when-my-
       | cloud-goes-do...
        
         | acdha wrote:
         | The claim is that the service handles distribution across
         | regions for reliability, so I think the more interesting
         | question is the odds of that _mechanism_ failing when your site
         | would otherwise be up[1].
         | 
         | Similarly, 3-2-1 is a backup strategy, and the pricing appears
         | to already include multiple copies using the same mechanism, so
         | the correct calculation would be the cost of R2 plus whatever
         | _different_ mechanism you choose for disaster-recovery purposes,
         | such as on-premise storage or a completely different provider.
         | 
         | 1. For example, if you use Cloudflare as your CDN / DNS
         | provider and they have a catastrophic failure, the fact that
         | your storage is inaccessible is just another facet of the same
         | issue.
        
         | syshum wrote:
         | What do you mean... The cloud does not need backups /s
        
           | zacincceo wrote:
           | Look up Bucket Replication Policies....
        
             | syshum wrote:
             | clearly the /s is not seen.....
             | 
             | that for those in the back is sarcasm....
        
         | nicoburns wrote:
         | If you want to do proper 3-2-1, shouldn't you be storing the
         | other backups on a different service?
        
           | zacincceo wrote:
           | That's the reason why most enterprises have 4-7 copies of
           | their data... There's no inherent geo-replication by default
           | (and, as CF notes, it's a "Hotel California" problem - too
           | expensive to egress completely from AWS).
        
       | truetraveller wrote:
       | >providing zero-cost egress for stored objects -- no matter your
       | request rate
       | 
       | What's the catch? Imagine this simple case: I upload a 100GB
       | video file. I send the link to 100K Twitter followers, who all
       | download this file over the course of a few days. That's 10 PB of
       | bandwidth.
       | 
       | Amazon S3's cost for this would be $900,000, at $90 per TB. Would
       | I pay $0 on R2? And would there be throttling / rate-limiting?
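For reference, the arithmetic in that scenario, using the flat $90/TB quoted above (S3's real tiered egress rates would land somewhat lower, but in the same ballpark):

```python
# Cost of the hypothetical viral download, at the flat rate quoted above.
file_gb = 100
downloads = 100_000

total_tb = file_gb * downloads / 1000   # 10,000 TB, i.e. 10 PB of egress
s3_cost = total_tb * 90                 # at the quoted $90/TB
print(f"{total_tb:,.0f} TB egress -> ${s3_cost:,.0f} on S3, $0 on R2")
```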
        
         | Strom wrote:
         | Getting 100k people to download 100GB seems like a rather lofty
         | goal. Has this ever been done via social media? The only case I
         | can think of is a game like Call of Duty, which has a ginormous
         | marketing campaign behind it. Even so the download wasn't
         | linked via social media.
         | 
         | Most people don't even have 100k followers. The follower-to-
         | click ratio isn't going to be 100% in any situation. Click-to-
         | full-download ratio is going to be low too, especially so at
         | 100GB. A lot of people don't have that much free space on their
         | devices even!
         | 
         | I think this scenario is firmly in outlier land, and thus not
         | really relevant to cloudflare's calculations.
        
       | BlackMiracle wrote:
       | Can anyone confirm if R2 must be used with Workers and other
       | Cloudflare services only? Is it possible to use it as file
       | storage instead of S3?
        
         | greg-m wrote:
         | Yes. We have an in-Worker API to start, but will also make R2
         | available outside of Workers as an API endpoint.
        
         | sudhirj wrote:
         | They've said it supports the S3 API, so yes, you could use it
         | wherever you'd use S3 instead.
        
         | hartator wrote:
         | They say fully compatible with S3 API. I expect workers access
         | to be just a sweet addition.
        
       | numair wrote:
       | There is one major reason S3 remains the king of storage for
       | mobile media uploads: bucket notifications. Does R2 implement
       | this feature? If so, I'm going to have to run some experiments
       | with this...
        
         | sudhirj wrote:
         | To be honest, I think the biggest draw will be for companies
         | (like where I work) that put large objects on S3 and distribute
         | it to hundreds / thousands / millions of customers. The egress
         | direct from S3 is on the order of 8 cents a GB; with
         | CloudFront in front of it it's a few cents lower, and you can
         | negotiate pricing a little lower if you're big enough. But not
         | by an order of magnitude.
         | 
         | We'd stick R2 in front of an S3 bucket and wipe off the
         | biggest portion of our bill.
         | 
         | S3 does absolutely have a ton of other stuff like events,
         | lifecycle, Glacier, Lambda etc and is plugged into the AWS
         | ecosystem, so I doubt we're exiting it completely. But this is
         | a solid option as a front for egress from S3.
        
         | greg-m wrote:
         | Hey, PM for R2 here!
         | 
         | We're fully integrated with Workers, so you can write a Worker
         | that calls additional logic when a request is made to the
         | bucket's URL.
         | 
         | We have notifications - where a change to the bucket invokes a
         | Worker with a specific event - on our roadmap, but have a bunch
         | of other work prioritized ahead of them.
        
           | numair wrote:
           | Hi Greg, thanks for the feedback. It'd be great if you could
           | get around to putting up an example of how to do bucket
           | notifications + lifecycle using Workers as a temporary
           | workaround until it's part of the "core." I don't think I'm
           | the only person with this use case, but maybe I'm more of an
           | edge-case minority than I imagine... In any case, a code
           | library / "recipe collection" (do we still call them
           | cookbooks?) would be great when this launches.
        
       | aenis wrote:
       | Sounds great (nearly too-good-to-be-true great). Wonder what the
       | SLA will look like. I have been using GCS, S3 and Firestore - and
       | their actual reliability varies significantly, while advertised
       | SLAs are similar. For instance, with Firestore one has to
       | implement a pretty lenient exponential backoff in case of a
       | timeout, and if the backoff results in the object being retrieved
       | in, say, 2 minutes - that's still OK as per the GCS SLA. It
       | obviously makes it hard to use for user-facing stuff, such as
       | chatbots, where you can't afford to wait that long. In my
       | anecdotal experience of using Firestore for about 10 million
       | operations per day, we will usually have a problem like that
       | every few days, and that means user-noticeable failure. It would
       | be great to read more on Cloudflare's approach to reliability
       | defined as 99th-percentile max latency. Can't wait to give it a
       | try with our workloads.
        
       | rsync wrote:
       | This is interesting ...
       | 
       | For the longest time we tried to convince people that they should
       | have an off-Amazon archive of their S3 data ... we even ran an
       | ad to that effect in 2012[1].
       | 
       | The (obvious) reason this isn't compelling is the cost of egress.
       | It's just (relatively) too expensive to offload your S3 assets to
       | some third party on a regular basis.
       | 
       | So if R2 is S3 with no egress, suddenly there is a value
       | proposition again.
       | 
       | Further, unlike in 2012, in 2021 we have _really great tooling_
       | in the form of 'rclone'[2][3], which allows you to move data from
       | cloud to cloud without involving your own bandwidth.
       | 
       | [1] The tagline was "Your infrastructure is on AWS and your
       | backups are on AWS. You're doing it wrong."
       | 
       | [2] https://rclone.org/
       | 
       | [3] https://www.rsync.net/resources/howto/rclone.html
        
         | blackandsqueaky wrote:
         | > So if R2 is S3 with no egress, suddenly there is a value
         | proposition again.
         | 
         | That doesn't appear to be what they're doing, they don't seem
         | to have changed their existing operating model at all:
         | 
         | > R2 will zero-rate infrequent storage operations under a
         | threshold -- currently planned to be in the single digit
         | requests per second range. Above this range, R2 will charge
         | significantly less per-operation than the major providers. Our
         | object storage will be extremely inexpensive for infrequent
         | access and yet capable of and cheaper than major incumbent
         | providers at scale.
         | 
         | What I read this as is "we won't bill you until your traffic
         | spikes, then you'll pay us, oh how you'll pay us"
         | 
         | Transparent bandwidth pricing would be a far more interesting
         | announcement. This is the second post I've seen from Cloudflare
         | in recent months throwing bricks at AWS over bandwidth pricing,
         | while failing to mention Cloudflare bandwidth is some of the
         | most expensive available.
        
           | heipei wrote:
           | This is about the operation, not about bandwidth the way that
           | I read it. All providers have prices for bandwidth and prices
           | for different "tiers" of operations (store, retrieve, delete,
           | list, etc). The way I read it is that bandwidth is always
           | 100% free, and storage operations are free under a certain
           | threshold. I hope I'm right ;)
        
             | eastdakota wrote:
             | This is correct. Bandwidth (ingress and egress) always
             | free, regardless of volume. Transactions free at low volume
             | (~<1/sec) but we'll charge at higher volumes. Storage we
             | charge for. For both transactions and storage, we aim to be
             | at least 10% less expensive than S3. And, again, for the
             | sake of absolute clarity: egress/ingress always free.
        
               | blackandsqueaky wrote:
               | Does this mean we should contact our enterprise account
               | manager regarding our existing spend? For the sake of
               | absolute clarity: we're currently paying for bandwidth
        
           | mdasen wrote:
           | The way I read it is that for low-scale users, they're not
           | going to have request pricing. For higher-scale users, "R2
           | will charge significantly less per-operation than the major
           | providers". AWS charges $0.0004 per thousand GET requests.
           | Let's say that R2 charges $0.0003 per thousand GET requests.
           | That's still cheaper than AWS or Backblaze's B2 (even if just
           | barely) and if they're not charging for bandwidth, then it's
           | really cheap.
           | 
           | The announcement says that they're eliminating bandwidth
           | charges three times.
           | 
           | I don't know the whole economics around cloud storage and
           | bandwidth so maybe this is unrealistic pricing and your
           | suspicions are well founded. However, Backblaze is offering
           | storage at $0.005/GB and bandwidth at $0.01/GB. Cloudflare
           | is charging 3x more than Backblaze for the storage and $0
           | for the bandwidth. Given that Cloudflare's costs for
           | bandwidth are probably lower than Backblaze's, that doesn't
           | seem so unreasonable - but I could be very wrong.
           | 
           | I think Cloudflare probably sees R2 as something that is
           | sustainable, but creates demand for their enterprise
           | products. You start The NextBigThing with R2 and suddenly
           | your application servers are under attack. You have a
           | relationship with Cloudflare, you're used to their control
           | panel, you trust them, and when you're at the scale that
           | you're getting attacked like this you can drop $10,000/mo
           | because you're bringing in a bunch of revenue - $10,000/mo is
           | less than 1 software engineer in the US.
           | 
           | R2, in a certain way, can be a marketing tool. "Come use our
           | S3 competitor with free bandwidth rather than getting locked
           | into AWS's transfer pricing." 6-12 months go by and you're
           | substantially larger and want more complex stuff and you're
           | already getting emails from Cloudflare about their other
           | offerings, you see them in the control panel, etc.
           | 
           | It seems like Cloudflare might be trying to move in on AWS's
           | market. R2 is an easy way for them to do it. It seems like S3
           | has high margins. Competing storage services can be a
           | fraction of the cost per GB and AWS's bandwidth markup is
           | incredibly high. If you're looking to attack a competitor's
           | market, it seems like going after one of their highest-margin
           | products could make the most sense. Again, R2 becomes a
           | marketing tool for future cloud offerings.
           | 
           | Part of Cloudflare's strategy might be targeting things that
           | they see very high margins on and being willing to accept
           | lower margins. If something has 50% margins and you're
           | willing to accept 20% margins, you're still doing pretty
           | great. Plus, over time, the cost of hardware comes down and
           | you can keep your prices at the same level once people are
           | happily inside your ecosystem and don't want to deal with
           | migrations.
           | 
           | > CloudFlare bandwidth is some of the most expensive
           | available
           | 
           | It sounds like you might have gotten burned by something with
           | Cloudflare. I don't have any horror stories, but I'm always
           | interested in new data points if you have them.
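A worked version of that comparison, using the list prices mentioned in this thread (B2 at $0.005/GB-month storage plus $0.01/GB egress; R2 at $0.015/GB-month with free egress; request/operation charges ignored). Which is cheaper depends entirely on the egress-to-storage ratio:

```python
# Monthly bill sketch at the list prices quoted in this thread.
def b2_monthly(storage_gb, egress_gb):
    return storage_gb * 0.005 + egress_gb * 0.01

def r2_monthly(storage_gb, egress_gb):
    return storage_gb * 0.015  # egress is free regardless of volume

# 1 TB stored, 5 TB served per month: R2 is cheaper ($15 vs $55).
print(r2_monthly(1000, 5000), b2_monthly(1000, 5000))
# 1 TB stored, 100 GB served: B2 is cheaper ($6 vs $15).
print(r2_monthly(1000, 100), b2_monthly(1000, 100))
# Break-even: 0.005*s + 0.01*e = 0.015*s  =>  e = s,
# i.e. R2 wins once monthly egress exceeds the amount stored.
```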
        
         | dwild wrote:
         | > So if R2 is S3 with no egress, suddenly there is a value
         | proposition again.
         | 
         | Isn't B2 from Backblaze already filling that need? I mean,
         | more choice is always better for sure, but R2's goal seems
         | really to be a CDN more than a backup space, and it does feel
         | like their money maker is the CDN part, not the storage
         | part... I feel like trusting them to store data long-term
         | without using the CDN part is a little bit risky.
        
           | znpy wrote:
           | Frankly, Backblaze looks like an over-specialized player
           | whereas Cloudflare is already used for a lot of stuff.
           | 
           | E.g. my employer already has stuff on Cloudflare; using their
           | services is just as easy as pulling their Terraform provider.
           | OTOH, for Backblaze, I'd have to go through the whole
           | evaluation process, security and legal compliance, etc.
        
           | loxias wrote:
           | B2 charges for egress, $0.01/GB. More interesting to me is
           | Wasabi, which charges ~$0.006/GB*month and no egress fees at
           | all.
        
             | mwambua wrote:
             | B2 doesn't charge for egress to Cloudflare.
             | (https://www.cloudflare.com/bandwidth-alliance/)
        
               | loxias wrote:
               | (stupid question) What does that get me? Does Cloudflare
               | have cheap VMs I can rent with competitive prices?
               | (including data egress fees?)
               | 
               | For object storage that I play with, some clients are in
               | the cloud, with most sitting at home behind residential
               | internet.
        
               | jauer wrote:
               | (approximately) Cloudflare provides a proxy service that
               | you'd use to access your B2 data from home or other cloud
               | without paying for egress.
               | 
               | They can do this because it costs almost nothing to move
               | data between B2 and Cloudflare, and then from Cloudflare
               | to almost anywhere. Moving data from B2 to most other
               | places on the internet likely costs them more because
               | Backblaze isn't in a position to negotiate advantageous
               | peering agreements with ISPs.
        
               | loxias wrote:
               | Wow! Cool! Very surprised that Cloudflare wouldn't charge
               | an arm and a leg for such a service... considering
               | they're moving the actual bits.
               | 
               | I'm poking around at the Cloudflare website, what's the
               | name of the aforementioned service? What term should I
               | google?
               | 
               | I'm ignorant of "modern Cloudflare" -- other than reading
               | their _fantastic_ technical blog, I've never used them
               | in a professional capacity and don't know their product
               | offering -- other than a cache, CDN, DDOS protection, and
               | a Lambda.
        
             | anderspitman wrote:
             | Wasabi egress is free, but they "reserve the right to limit
             | or suspend your service"[0] if it doesn't fit the usage
             | patterns they've designed for.
             | 
             | [0]: https://wasabi.com/paygo-pricing-faq/#free-egress-
             | policy
        
               | loxias wrote:
               | Huh, thanks for that. I hadn't noticed (was that always
               | there??). So, sustained egress at rate R/sec means I have
               | to use 2500000 * R amount of storage per month, hrm...
               | 
               | Possibly can't use it for one of the "ponies" I was
               | working on, but probably still good as "ye huge media
               | archive".
        
             | 35mm wrote:
             | Wasabi doesn't charge for egress ('fair use' policy
             | applies), but they do have a 3 month minimum for data,
             | including deleted data.
             | 
             | This caught me out when I was transferring 100GB files that
             | only needed to be up for a few hours, and I ended up
             | getting charged as if I had hosted them for 3 months.
        
         | anderspitman wrote:
         | > 'rclone'[2][3] which allows you to move data from cloud to
         | cloud without involving your own bandwidth
         | 
         | Maybe I'm reading this wrong, but the data does pass through
         | the machine where rclone is running. rclone does support
         | remote-to-remote transfers[0], but I believe only for remotes
         | of the same type (ie S3 to S3).
         | 
         | [0]: https://rclone.org/docs/#server-side-copy
        
           | stavros wrote:
           | Does this mean "remotes that speak the S3 protocol", or
           | "remotes that _are_ S3"? The former would require S3
           | supporting induced requests (so it POSTed the data
           | somewhere); the latter would require a "copy" operation on
           | S3. I don't know which one is supported.
        
         | slig wrote:
         | The [3] resource is fantastic! Have you tried sponsoring
         | rclone? I was studying their docs last week, and I'm sure
         | people reading the docs are interested in this use case of
         | moving between clouds without using their own bandwidth.
        
           | rsync wrote:
           | Thanks - I made a point to have that document created _not
           | specifically_ with the rsync.net use-case in mind.
           | 
           | You can follow that howto with any two cloud endpoints if you
           | wanted to.
        
       | abhv wrote:
       | Very generous offer from Cloudflare. I signed up.
       | 
       | The main question: how can Cloudflare make this into a
       | sustainable business?
       | 
       | * cost/gb is cheaper or same as s3, gcp, azure
       | 
       | * no egress charges to customers, but they still have to pay for
       | transit when they cross an AS!
       | 
       | what is the hidden angle Cloudflare is using here?
        
         | wongarsu wrote:
         | Cost/GB is three times that of Backblaze; bandwidth is
         | generally too cheap to meter (outside AWS, GCP and Azure, who
         | use egress costs for lock-in).
         | 
         | The offer is no doubt competitively priced, but it's also much
         | more lucrative for Cloudflare than people merely using it as a
         | CDN for their s3/b2/azure/whatever.
        
           | drip-egg-jack wrote:
           | You need to account for IO load as well; more bandwidth means
           | more requests to the disks, and I assume they will use
           | mechanical HDDs for the storage - SSDs are still too
           | expensive to support $0.015/GB. A good mechanical drive can
           | give you up to 200 IOPS and maybe 200MB/s, so they will have
           | to copy the data a lot to be able to increase capacity on
           | demand. Making it free doesn't seem to be sustainable at all.
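To make the IO argument concrete, here is the spindle math under the comment's own assumptions (~200 MB/s sequential throughput per mechanical drive, zero cache hits; in practice Cloudflare's edge cache would absorb most reads, so this is a worst case):

```python
# Spindle count to sustain a given aggregate egress with no cache hits,
# at ~200 MB/s sequential throughput per drive (the comment's estimate).
DRIVE_MB_PER_S = 200

def drives_needed(egress_gb_per_s):
    return egress_gb_per_s * 1000 / DRIVE_MB_PER_S

print(drives_needed(10))  # 10 GB/s of uncached egress -> 50.0 drives
```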
        
         | ithkuil wrote:
         | > no egress charges to customers, but they still have to pay
         | for transit when they cross an AS!
         | 
         | Can somebody TL;DR what being a member of the "Bandwidth
         | Alliance" means?
        
           | NicoJuicy wrote:
           | > what does being member of the "bandwidth alliance" mean?
           | 
           | Cloud providers normally have huge profit margins on egress
           | (outgoing) data and none on incoming data (to attract you to
           | give them your data).
           | 
           | This additionally incentivizes customers to stay within a
           | certain cloud and not use competing products, since egress
           | costs to a 3rd party make the price difference moot.
           | 
           | Cloudflare (I think) started the Bandwidth Alliance to make
           | these prices more fair. As in: work together, have direct
           | network connections, and reduce customer billing.
           | 
           | AWS didn't want to join as they are no. 1 currently:
           | https://blog.cloudflare.com/aws-egregious-egress/
        
         | manigandham wrote:
         | Cloudflare is already providing free egress to object storage
         | hosted elsewhere (like B2). Now they can handle the storage too
         | and charge you for it.
         | 
         | It's a net gain, with a storage price higher than B2 or Wasabi,
         | and also moves your data to their platform for more ecosystem
         | lock-in.
        
         | NicoJuicy wrote:
         | I don't know/think there is a hidden angle.
         | 
         | E.g. what could be the hidden angle of providing domains at
         | cost (which they do)?
         | 
         | I think it's a matter of having a more complete cloud suite and
         | expanding their portfolio while gaining market share.
         | 
         | Clarification: I don't think they do this at cost, just cheaper
         | than the big 3.
        
           | abhv wrote:
           | I suspect the story is a bit more complicated.
           | 
           | They seem to learn a lot by centralizing a lot of the
           | internet's traffic through their core. Perhaps it makes their
           | ddos-protection more robust when they can train on "what
           | normal usage" looks like "for the longer tail".
           | 
           | Perhaps they are building moats around that business and
           | expecting a future when Fortune1000 requires their cdn.
        
             | NicoJuicy wrote:
             | I don't think that's correct.
             | 
             | Their CDN is based on their SDN. Which is also at the core
             | of all their products, including DDOS protection.
             | 
             | Their primary product is the SDN and a CDN was a logical
             | pick as first product "release".
             | 
             | E.g. you shouldn't think of their products in terms of
             | "regions" as with traditional cloud providers. In
             | Cloudflare it's programmable, not configurable.
        
         | irq-1 wrote:
         | > they still have to pay for transit when they cross an AS!
         | 
         | That's their advantage -- a global network that only exits to
         | the local AS.
        
       | sneak wrote:
       | Feature request: automatic ipfs integration.
        
         | [deleted]
        
       | dhess wrote:
       | We have a fairly large Nix binary cache in S3, with not
       | insignificant egress fees, and I'd love to try this out as a
       | replacement.
        
       | Youden wrote:
       | Interesting pricing considering Backblaze is another Bandwidth
       | Alliance member and they only charge $0.005/GB-month (vs.
       | $0.015/GB-month). B2 + Cloudflare gives you a similar deal at a
       | third the cost.
        
         | geostyx wrote:
         | I'm excited because while B2 + Cloudflare is great, the
         | speed+latency isn't the greatest for some applications. So
         | there's definitely a place for R2 here to compete more with AWS
         | S3 than B2.
        
           | heipei wrote:
           | I'm a fan of B2 as well, but for some use-cases they
           | seriously need to up their game. They only have three
           | datacenters (CA, AZ, and Amsterdam), they still don't have a
           | public status page, their admin UI is lacking lots of
           | features (like proper invoices and different 2FA options),
           | their permission system is very basic and inflexible, and
           | they are not integrating compute with storage like AWS
           | already does and Cloudflare will eventually be able to.
           | However they are impossible to beat on cost, and for me their
           | latency has recently improved significantly and has become
           | much more stable, so I'm not going to move anytime soon.
        
           | jonplackett wrote:
           | What kind of speed difference are you seeing with B2 vs S3?
        
             | 422long wrote:
             | Latency (time to first byte) in serving infrequently
             | accessed images was a big problem for me with B2. The cost
             | was low enough that I've stuck with it though and coded on
             | the front end of the site to use placeholder images until
             | the real media can be retrieved.
        
         | Strom wrote:
         | Yes but you can only use B2 via CloudFlare for web pages. Using
         | it as a data storage platform isn't allowed. Unless of course
         | you're willing to pay handsomely via an enterprise contract,
         | but then the pricing changes.
         | 
         |  _Use of the Services for serving video or a disproportionate
         | percentage of pictures, audio files, or other non-HTML content
         | is prohibited, unless purchased separately as part of a Paid
         | Service or expressly allowed under our Supplemental Terms for a
         | specific Service._ [1]
         | 
         | --
         | 
         | [1] 2.8 Limitation on Serving Non-HTML Content
         | https://www.cloudflare.com/terms/
        
           | eastdakota wrote:
           | That limitation doesn't apply to the R2 service or Workers
           | generally. We'll update and clarify our ToS. Thanks for
           | flagging!
        
             | ridruejo wrote:
             | Continue to be impressed to have the CEO replying to
             | comments on HN, as with Stripe ... that's a good signal :)
        
             | DoctorOW wrote:
             | But to clarify it _does_ apply to B2?
        
               | 0xbkt wrote:
               | _You may use Cloudflare Pages and Workers (whether in
               | conjunction with a storage offering such as Cloudflare
               | Workers KV and Durable Objects or not) to serve HTML
               | content as well as non-HTML content (e.g., image files,
               | audio files) other than video files._
               | 
               | See: https://www.cloudflare.com/supplemental-terms/ §
               | Cloudflare Pages and Cloudflare Workers®
               | 
               | The said limitation should apply, however, to their
               | traditional orange-cloud proxied service, whether the
               | origin is B2 or not. I am not sure if being a Bandwidth
               | Alliance partner makes a difference.
        
               | DoctorOW wrote:
               | So the gray area comes from an exception being granted
               | from R2 not specified in that linked page. R2, like B2 is
               | part of Cloudflare's bandwidth alliance, so is the
               | unwritten exception for R2 or for the bandwidth alliance?
        
             | [deleted]
        
         | adventured wrote:
         | Very much depends on what you're utilizing Backblaze for. It's
         | unusable for volume image hosting for example. It has severe
         | performance problems with small files, as does much of the
         | object storage field (including DigitalOcean, they directly
         | warn customers not to bother using their object storage for
         | small files if performance is a concern). The CDN doesn't help
         | much, Backblaze chokes if you try to shovel a large number of
         | small files at it (their infrastructure was not designed for
         | that and they admit it in their docs), or attempt to pull a
         | lot of small files out of it. AWS is pretty great when it comes
         | to that by contrast, and with the price to go with it. I'm
         | looking forward to finding out what R2 can do.
        
           | drip-egg-jack wrote:
           | This is true - most object storage providers are really bad
           | at serving data fast; they are more focused on cold storage.
           | Tebi.io works really fast for small files, plus it
           | is a geo-distributed object storage meaning that data is
           | physically stored in different locations around the world,
           | greatly reducing latency.
           | 
           | Backblaze, DO Spaces simply were not designed for this in the
           | first place.
        
         | gnfargbl wrote:
         | I recently started keeping about 30TB of ElasticSearch and
         | Postgres backups in Backblaze B2. The price is great, but
         | getting data in is not particularly easy as the Backblaze S3
         | API seems to fail a high proportion of requests when under
         | load.
         | 
         | If R2 can be approximately as reliable on ingest as
         | AWS/GCS/Azure is, but without the egress fees of the other
         | major providers, then $0.015/GB-month seems like a pretty good
         | deal.
        
           | Spivak wrote:
           | B2 really is an exercise in making sure your code is robust
           | with respect to external APIs but I'll be damned if it isn't
           | cheap. We ended up building a whole queueing system for it
           | because ad-hoc retry logic stopped being good enough.
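[Editor's note] The ad-hoc retry logic described above can be sketched roughly as follows. This is a generic exponential-backoff wrapper, not Backblaze's actual API; `flaky_upload` is a hypothetical stand-in for any S3-compatible call that fails transiently under load:

```python
import random
import time


def with_retries(operation, max_attempts=5, base_delay=0.5):
    """Retry a flaky zero-argument callable with exponential backoff.

    The delay doubles on each attempt, with random jitter added to
    avoid synchronized retry storms against an overloaded API.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            sleep_for = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(sleep_for)


# Simulated upload that fails twice before succeeding.
calls = {"n": 0}

def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated transient failure")
    return "ok"

result = with_retries(flaky_upload, base_delay=0.01)
```

A real queueing system adds persistence and concurrency limits on top of this, but the backoff-with-jitter core is the same idea.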
        
           | christophilus wrote:
           | I've had a great experience with Wasabi + BunnyCDN. Might be
           | worth kicking the tires, as it's a really inexpensive combo.
        
       | thecodrr wrote:
       | Not sure why this is more appealing than Wasabi? As far as I can
       | see, Wasabi is cheaper, has great speeds, fantastic S3
       | compatibility, their dashboard is a joy to use so what is the
       | actual "special" thing here? I mean sure, it's a good thing to
       | have more competition but the way everyone here is describing the
       | situation makes it seem as if Cloudflare is going to be the
       | cheapest & the best.
        
         | sliken wrote:
         | I believe the minimum object retention time is 3 months, which
         | killed it for our use case. I.e. create a file today, delete it
         | today, and pay for it for 3 months.
        
         | macromaniac wrote:
         | https://wasabi.com/rcs-pricing-faq/
         | 
         | >If your monthly egress data transfer is greater than your
         | active storage volume, then your storage use case is not a good
         | fit for Wasabi's free egress policy
         | 
         | >If your use case exceeds the guidelines of our free egress
         | policy on a regular basis, we reserve the right to limit or
         | suspend your service
         | 
         | Unless I am misreading, Wasabi can shut you down if your egress
         | is high.
        
         | NicoJuicy wrote:
         | 1 - Their portfolio is bigger. Making it easier for existing
         | clients to adopt. Eg. Why a lot of companies use AWS.
         | 
         | This product seems a good fit for their cloudflare workers.
         | 
         | 2 - Cloudflare is going heavy on egress costs, which represent
         | a large opportunity cost compared to the big 3 cloud
         | providers, and they are putting their "weight" to good use:
         | 
         | https://blog.cloudflare.com/bandwidth-alliance-charges-forwa...
         | 
         | Egress/ingress costs are also important for pricing, fyi.
         | 
         | 3 - Cloudflare has a known good reputation for their services (
         | eg. Uptime and costs) + an active community. ( Eg. Check their
         | blog )
         | 
         | Tbh I didn't know Wasabi; will check them out though.
         | 
         | 4 - Their datacenters are at the edge. Hard to compete on that
         | physical advantage in terms of speed.
         | 
         | Ps. Yes, I could be considered a fan, similar to AMD in the
         | past (still am, fyi).
        
         | jjeaff wrote:
         | My understanding with Wasabi is that it is only intended for
         | egress up to 100% of your stored data. It doesn't look like R2
         | has this limitation.
         | Though I can't imagine egress is actually completely free
         | beyond a certain point.
        
         | ukd1 wrote:
         | Wasabi is actually pretty great. +1 on this.
        
       | the8472 wrote:
       | _> R2 is designed with redundancy across a large number of
       | regions for reliability. We plan on starting from automatic
       | global distribution and adding back region-specific controls for
       | when data has to be stored locally, as described above._
       | 
       | Does that mean automatic caching across regions? Low-latency read
       | access everywhere without an extra CDN in front of it?
        
         | greg-m wrote:
         | Hey, PM for R2 here.
         | 
         | We're still deciding on how we want to handle caching. We
         | integrate with Workers, so manually caching is always possible.
         | The catch is we're currently building for strong consistency -
         | if we added a cache in front, we'd weaken that - so it will
         | likely be a configuration option we add later.
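[Editor's note] The consistency tradeoff the PM describes can be illustrated with a toy read-through cache in plain Python (nothing here is Cloudflare's actual implementation): once a value is cached, a write that goes straight to the authoritative store is invisible to readers until the TTL expires.

```python
import time


class ReadThroughCache:
    """Toy read-through cache over a strongly consistent store.

    Cache hits within the TTL can return stale data after a write
    that bypasses the cache - this is the weakened consistency.
    """

    def __init__(self, store, ttl_seconds=60.0):
        self.store = store   # the authoritative key/value store
        self.ttl = ttl_seconds
        self._cache = {}     # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._cache.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]  # cache hit: possibly stale
        value = self.store[key]  # cache miss: read through to the store
        self._cache[key] = (value, time.monotonic() + self.ttl)
        return value


store = {"obj": "v1"}
cache = ReadThroughCache(store, ttl_seconds=60.0)
first = cache.get("obj")   # populates the cache with "v1"
store["obj"] = "v2"        # write goes straight to the store
second = cache.get("obj")  # still returns the cached "v1"
```

Making caching an opt-in configuration, as the PM suggests, lets users choose between this behavior and strictly consistent (but slower) reads.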
        
           | js4ever wrote:
           | I'm more interested in eventually consistent with low
           | latency. Please add this setting ASAP :)
        
       | jiripospisil wrote:
       | > Our object storage will be extremely inexpensive for infrequent
       | access and yet capable of and cheaper than major incumbent
       | providers at scale.
       | 
       | How frequent is infrequent? In our case it's "never unless other
       | backups fail" and for that S3 Glacier Deep Archive is still
       | cheaper ($0.00099 per GB).
        
         | thangngoc89 wrote:
         | > currently planned to be in the single digit requests per
         | second range
         | 
         | This is a little bit above your quoted paragraph
        
         | Wallacy wrote:
         | "At launch, R2 will cost $0.015 per gigabyte of data stored on
         | the service, about half as much as AWS charges for customers on
         | its frequent access tier. Infrequent access to R2 will be free,
         | the company said. "
         | 
         | https://www.protocol.com/enterprise/cloudflare-r2-storage-aw...
         | 
         | So.... I still need to test, but if that is true it can be a
         | game changer. Of course... $0.00099 is almost free, but if
         | everything is done automagically it will be awesome.
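[Editor's note] Back-of-envelope arithmetic with the per-GB rates quoted in this thread; the S3 Standard storage rate ($0.023/GB-month) and internet egress tier ($0.09/GB) used for comparison are assumptions, not figures from the article:

```python
def monthly_cost(gb_stored, storage_rate, gb_egress=0.0, egress_rate=0.0):
    """Back-of-envelope monthly bill: storage plus egress."""
    return gb_stored * storage_rate + gb_egress * egress_rate


TB = 1024  # GB per TB

r2_storage = 0.015      # R2 at launch ($/GB-month), zero egress fees
glacier_deep = 0.00099  # S3 Glacier Deep Archive ($/GB-month)

# 10 TB of cold backups, never read: Deep Archive wins easily.
cold_r2 = monthly_cost(10 * TB, r2_storage)
cold_glacier = monthly_cost(10 * TB, glacier_deep)

# 10 TB stored AND 10 TB served per month: free egress dominates.
# (0.023 and 0.09 are assumed S3 Standard rates for illustration.)
hot_r2 = monthly_cost(10 * TB, r2_storage)              # egress is free
hot_s3 = monthly_cost(10 * TB, 0.023, 10 * TB, 0.09)
```

So for write-once cold archives Deep Archive stays roughly 15x cheaper on storage, while any workload that reads data back out shifts the math heavily toward free egress.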
        
       | jlelse wrote:
       | Curious to see how this compares to Bunny.net Edge Storage (they
       | are working on S3 support as well): https://bunny.net/edge-
       | storage/
        
       | wiradikusuma wrote:
       | For those who don't want to wait, there's DigitalOcean Spaces
       | (https://www.digitalocean.com/products/spaces/).
       | 
       | Disclaimer: I haven't used it, but planning to, since I already
       | use their VPS.
        
         | slig wrote:
         | Scaleway also provides a similar service. Had an issue recently
         | (deleted a bucket, and couldn't create another bucket with the
         | same name) and their support replied back _in minutes_.
        
           | js4ever wrote:
           | Scaleway S3 was slow as hell for small files last time I
           | tried, is it better now?
        
             | slig wrote:
             | I had no issues, maybe they're better now? People said the
             | same about DO and I haven't had any issues with them
             | either.
        
         | zeitg3ist wrote:
         | Spaces is not good, in my experience. Their Frankfurt storage
         | just stopped working for like a week. And last time I checked
         | they still didn't support critical features like per-bucket
         | access keys or index.html support.
        
         | frakkingcylons wrote:
         | Not that Spaces isn't a good product, but there are countless
         | S3 compatible object storage services.
         | 
         | This announcement is noteworthy because there are _no egress
         | fees_. Spaces still charges 1c per gigabyte for egress. Not
         | comparable in that aspect.
        
         | numair wrote:
         | Love the team at DigitalOcean but Spaces was NOT reliable the
         | last time I played with it. It's also lacking key features in
         | lifecycle and notification areas. Maybe they've gotten things
         | together in recent months/years and I haven't gotten the
         | update, but it was very "you get what you pay for" when I tried
         | to stage a project using it in 2018/19.
        
         | BackBlast wrote:
         | I tried spaces about 2 years ago. I had serious performance
         | issues with it. I abandoned it.
        
       | hakube wrote:
       | Is Cloudflare working its way to be a cloud service competitor
       | to AWS or other services out there?
        
       ___________________________________________________________________
       (page generated 2021-09-28 23:01 UTC)