[HN Gopher] Ransomware-resistant backups with duplicity and AWS S3
___________________________________________________________________
Ransomware-resistant backups with duplicity and AWS S3
Author : alanfranz
Score : 44 points
Date : 2022-01-27 19:57 UTC (3 hours ago)
(HTM) web link (www.franzoni.eu)
(TXT) w3m dump (www.franzoni.eu)
| czl wrote:
| Having point-in-time backups is a good start, but I can see
| ransomware adapting to slowly corrupt your data in a way that is
| reversible but may take months to detect. Your backups going back
| months will then contain this corruption. To detect this,
| application-level tripwires may be needed, like checksums.
| Finally, there is always reputation damage and threats to expose
| the attack and your data to the public via blackmail. Just
| because you have backups does not make you safe.
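|
| A minimal sketch of such a checksum tripwire in Python (the
| paths are hypothetical, and the manifest has to live somewhere
| the attacker cannot rewrite, or it proves nothing):
|
|     import hashlib
|     import json
|     import pathlib
|     import sys
|
|     # hypothetical: a tree of files that should never change
|     DATA = pathlib.Path("/srv/archive")
|     # hypothetical: manifest kept on read-only/off-host media
|     MANIFEST = pathlib.Path("/mnt/offhost/manifest.json")
|
|     def sha256(path):
|         h = hashlib.sha256()
|         with open(path, "rb") as f:
|             for chunk in iter(lambda: f.read(1 << 20), b""):
|                 h.update(chunk)
|         return h.hexdigest()
|
|     current = {str(p): sha256(p)
|                for p in DATA.rglob("*") if p.is_file()}
|     if MANIFEST.exists():
|         known = json.loads(MANIFEST.read_text())
|         # flag any known file whose hash changed or vanished
|         bad = [k for k, v in known.items()
|                if current.get(k) != v]
|         if bad:
|             sys.exit("tripwire: %d files changed" % len(bad))
|     else:
|         MANIFEST.write_text(json.dumps(current))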
| EvanAnderson wrote:
| Application "tripwires" are just another obstacle for an
| attacker to overcome. If audits aren't external to the system
| being audited, they're just as vulnerable to manipulation.
|
| A customer of mine in the financial sector sent their backups
| to a third party for independent verification quarterly. The
| third party restored the data into a clean system and
| reconciled it against the production system.
|
| That would be the kind of auditing that would be more apt to
| detect the "low and slow" attack.
| rmbyrro wrote:
| > _you'll need to make sure that your master access to AWS S3 is
| never compromised_
|
| Your master AWS credentials should never go onto your servers.
| Create IAM credentials with authorization to _only_ PUT objects
| into S3.
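|
| A minimal sketch of attaching such a policy with boto3 (the
| user, bucket, and policy names are invented; note that with
| s3:PutObject alone the server can't even read your backups
| back, which is what you want):
|
|     import boto3
|     import json
|
|     put_only = {
|         "Version": "2012-10-17",
|         "Statement": [{
|             "Effect": "Allow",
|             "Action": "s3:PutObject",
|             "Resource": "arn:aws:s3:::my-backup-bucket/*",
|         }],
|     }
|     boto3.client("iam").put_user_policy(
|         UserName="backup-writer",      # hypothetical user
|         PolicyName="put-only",
|         PolicyDocument=json.dumps(put_only),
|     )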
|
| > _For the purpose we have, Governance mode is OK_
|
| Maybe not (?), since Governance mode still allows anyone holding
| the bypass permission to delete previous versions. One careless
| mistake handling your access key/secret and it's bye-bye
| backups.
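|
| Compliance mode, by contrast, means nobody (not even the root
| account) can delete a locked version before its retain-until
| date. A minimal sketch of an upload under it with boto3 (bucket
| and key names invented; the bucket must have object lock
| enabled):
|
|     import boto3
|     from datetime import datetime, timedelta, timezone
|
|     boto3.client("s3").put_object(
|         Bucket="my-backup-bucket",         # hypothetical
|         Key="backups/2022-01-27.tar.gpg",  # hypothetical
|         Body=open("backup.tar.gpg", "rb"),
|         ObjectLockMode="COMPLIANCE",
|         ObjectLockRetainUntilDate=(
|             datetime.now(timezone.utc) + timedelta(days=90)),
|     )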
|
| End note: this is still not enough. An attacker could compromise
| your backup script and wait for 40 days before locking you out
| of your data. When you try to recover a backup, you'll notice
| you have none.
|
| Perhaps most attackers won't have the patience and will just
| forget about you, but who knows?
|
| A naive protection against that would be to store at least one
| version of the backups forever. But we're still not covered,
| since the attacker could compromise that particular version,
| and boom.
|
| I can't think of a comprehensive, fully closed-loop solution
| right now...
| rsync wrote:
| "I can't think of a comprehensive, fully closed-loop solution
| right now..."
|
| The ZFS snapshots that you may configure on an rsync.net
| account are immutable.
|
| There are no credentials that can be presented to destroy them
| or alter their rotation other than our account manager, whose
| changes _always pass before a set of human eyes_. Which is to
| say, no 'zfs' commands are ever automated.
|
| So the short answer is you simply point borg backup to
| rsync.net and configure some days/weeks/months of snapshots.
|
| The long answer - if you're interested:
|
| https://twitter.com/rsyncnet/status/1470669611200770048
|
| ... skip to 52:20 for "how to destroy an rsync.net account":
|
| "... Another thing that lights up big and red on our screen is
| ... someone's got a big schedule of snapshots ... and then they
| change it to zero ... you've got seven days and four weeks and
| six months but we want to change that to zero days of
| snapshots. We see those things ... and we reach out to people."
| selykg wrote:
| Re: HN rsync.net discounts. Is that basically the Borg-
| specific product, or is there some discount on the normal
| rsync.net service? The Borg product misses one important
| thing for me: sub-accounts. But the price difference between
| them is pretty large. I don't need the hand-holding service
| (I already use the Borg service you provide), but I would
| definitely prefer having the sub-accounts, and the ZFS
| snapshots might be useful too.
| rsync wrote:
| email info@rsync.net and someone (possibly me) will get you
| squared away ...
| antaviana wrote:
| One good alternative is to upload to an S3 bucket with object
| lock enabled. This way you can store immutable objects.
|
| You can make them immutable for everyone if you wish and the only
| way to delete them is to close the AWS account.
|
| I cannot think of a safer place for a backup than a bucket with
| object lock.
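|
| A minimal sketch of setting one up with boto3 (bucket name and
| retention window invented; object lock can only be turned on
| when the bucket is created):
|
|     import boto3
|
|     s3 = boto3.client("s3")
|     s3.create_bucket(Bucket="locked-backups",  # hypothetical
|                      ObjectLockEnabledForBucket=True)
|     # every new object version becomes immutable for 90 days
|     s3.put_object_lock_configuration(
|         Bucket="locked-backups",
|         ObjectLockConfiguration={
|             "ObjectLockEnabled": "Enabled",
|             "Rule": {"DefaultRetention":
|                      {"Mode": "COMPLIANCE", "Days": 90}},
|         },
|     )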
| momothereal wrote:
| Can you still configure object retention when they are locked?
| i.e. automatically delete objects after X days.
| WatchDog wrote:
| The article is about object lock
| mlac wrote:
| One good alternative to reading the articles is just skimming
| the comments.
| hughrr wrote:
| I would never use this after being burned badly. Duplicity hits
| a scalability brick wall on large file volumes: it consumes
| ridiculous amounts of CPU and RAM on the host machine and leads
| to irrecoverable failure where it can't back up or restore
| anything. Fortunately I caught this before we had a DR scenario.
|
| I am using rdiff-backup over SSH to replace it now. This has been
| reliable so far but recovery times are extensive.
| czl wrote:
| Assuming ZFS is reasonable for the use case: incremental ZFS
| snapshots are likely more efficient, since they work at the
| block level rather than the file level.
| hughrr wrote:
| Depends on the recovery cost but yes I agree they are
| probably a better solution.
| rsync wrote:
| "I would never use this after being burned badly. Duplicity
| hits a scalability brick wall on large file volumes which
| consumes ridiculous amounts of CPU and RAM on the host machine
| and leads to irrecoverable failure where it can't backup or
| restore anything."
|
| I believe you are correct and I believe that in my private
| correspondence with the duplicity maintainer (we sometimes
| sponsor duplicity development[1]) he sort of conceded that borg
| backup[2] is a better solution.
|
| If the cloud platform you point your borg backups to can
| configure immutable snapshots (that is, _they_ create, rotate,
| and destroy them) then a good solution would be using borg
| backup over SSH and configuring some of those snapshots[3].
|
| [1] https://www.rsync.net/resources/notices/2007cb.html
|
| [2] https://www.stavros.io/posts/holy-grail-backups/
|
| [3] https://twitter.com/rsyncnet/status/1453044746213990405
| nijave wrote:
| Seems easier to just do incremental snapshots unless you're on
| bare metal. Many hypervisors support them, and they're built
| into EC2/EBS.
|
| If you want to limit data, you can create additional drives and
| mount them at the appropriate location (or change your
| application config to save to the auxiliary drives).
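|
| For EBS that's a one-liner with boto3 (the volume id is a
| placeholder); EBS snapshots are incremental automatically:
|
|     import boto3
|
|     boto3.client("ec2").create_snapshot(
|         VolumeId="vol-0123456789abcdef0",  # placeholder
|         Description="nightly app-data snapshot",
|     )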
| andrewguenther wrote:
| At a minimum, you should include enabling MFA for the IAM user.
| Generally, I'd suggest against using IAM users entirely. Ideally
| you would use an IAM Role via federation or SSO. For my personal
| accounts I use AWS SSO even though I'm just one person since it
| enables me to do all my work through role-based authentication
| and is still protected by MFA on top.
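|
| A minimal sketch of the usual guardrail as an inline policy
| via boto3 (user, policy, and bucket names invented): deny
| destructive S3 calls whenever the request wasn't MFA-
| authenticated.
|
|     import boto3
|     import json
|
|     deny_without_mfa = {
|         "Version": "2012-10-17",
|         "Statement": [{
|             "Effect": "Deny",
|             "Action": ["s3:DeleteObject",
|                        "s3:DeleteObjectVersion"],
|             "Resource": "arn:aws:s3:::my-backup-bucket/*",
|             "Condition": {"BoolIfExists": {
|                 "aws:MultiFactorAuthPresent": "false"}},
|         }],
|     }
|     boto3.client("iam").put_user_policy(
|         UserName="backup-admin",           # hypothetical
|         PolicyName="deny-deletes-without-mfa",
|         PolicyDocument=json.dumps(deny_without_mfa),
|     )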
| [deleted]
| ignoramous wrote:
| Incredible that folks have to jump through so many hoops.
|
| At this point, AWS should offer versioned S3 backups accessible
| only offline or through customer support, enabled with a click of
| a button.
| justin_oaks wrote:
| AWS S3 object versioning makes it pretty easy to allow a server
| to add backup data without the ability to permanently modify or
| delete data that existed previously.
|
| For my backups I use restic and sync the restic repository data
| to S3. Even if the source data is corrupted, I can always roll
| back to the set of S3 object versions from a particular time.
|
| The downside to using S3 object versioning is that I haven't
| found any good tools to work with object versions across multiple
| objects.
|
| For example, I need to prune old backups from my restic
| repository. To do that I have to delete object versions that
| are no longer current (i.e., not the latest version of an
| object). To accomplish this I wrote a script using Boto3 (the
| AWS SDK for Python) that deletes non-current object versions
| and any delete-marker versions.
|
| The code was pretty straightforward, but I wish there were a
| tool that made it easier.
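|
| For reference, a minimal sketch of that kind of pruning script
| (bucket name invented; this permanently deletes every non-
| current version and every delete marker, so only run it after
| verifying the current state is good):
|
|     import boto3
|
|     BUCKET = "restic-backups"   # hypothetical
|     s3 = boto3.client("s3")
|     for page in (s3.get_paginator("list_object_versions")
|                    .paginate(Bucket=BUCKET)):
|         # non-current versions first, then delete markers
|         doomed = [v for v in page.get("Versions", [])
|                   if not v["IsLatest"]]
|         doomed += page.get("DeleteMarkers", [])
|         for v in doomed:
|             s3.delete_object(Bucket=BUCKET, Key=v["Key"],
|                              VersionId=v["VersionId"])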
| dageshi wrote:
| Not 100% sure, but can't you set up rules for that stuff in S3
| itself? Like delete everything older than x date but never
| delete the current version?
|
| I have a feeling I set something like this up, but it's been a
| while since I did it.
| nijave wrote:
| Yeah, you can use lifecycle policies:
|
| https://docs.aws.amazon.com/AmazonS3/latest/userguide/object...
| andrewguenther wrote:
| You are correct, S3 lifecycle policies support deleting non-
| current object versions once they're X days old as well as X
| versions old:
| https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecy...
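|
| A minimal sketch of such a rule applied with boto3 (bucket
| name and the 90-day window are invented; this mirrors the
| documented "expire noncurrent versions, then remove the
| orphaned delete markers" pattern):
|
|     import boto3
|
|     boto3.client("s3").put_bucket_lifecycle_configuration(
|         Bucket="restic-backups",   # hypothetical
|         LifecycleConfiguration={"Rules": [{
|             "ID": "expire-noncurrent",
|             "Status": "Enabled",
|             "Filter": {},
|             # delete versions 90 days after they stop being
|             # the current version
|             "NoncurrentVersionExpiration":
|                 {"NoncurrentDays": 90},
|             # then clean up the now-expired delete markers
|             "Expiration": {"ExpiredObjectDeleteMarker": True},
|         }]},
|     )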
___________________________________________________________________
(page generated 2022-01-27 23:00 UTC)