[HN Gopher] The 3-2-1 Backup Rule - Why Your Data Will Always Survive (2019)
       ___________________________________________________________________
        
       The 3-2-1 Backup Rule - Why Your Data Will Always Survive (2019)
        
       Author : Apocryphon
       Score  : 70 points
       Date   : 2021-11-24 18:28 UTC (4 hours ago)
        
 (HTM) web link (www.vmwareblog.org)
 (TXT) w3m dump (www.vmwareblog.org)
        
       | mike_d wrote:
       | People often forget the 3-2-1 rule should also apply to the
       | secrets you use to encrypt/store your backups. If you can't
       | decrypt it, it isn't a backup.
       | 
       | I use a crazy long passphrase to encrypt my backups, but should I
       | forget it - it is also printed on archival paper inside a sealed
       | envelope in a friend's safe deposit box (I also have a copy of his
       | backup passphrase for mutually assured destruction :)).
       | 
       | Also, every once in a while run a fire drill and actually restore
       | something from each of your backups. This is when you find out
       | the rsync job has been stuck for the last 80 days.
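A minimal sketch of such a fire drill (the directory layout and the 7-day staleness threshold are hypothetical; a real setup would restore through the actual backup tool rather than compare against a mirror directory):

```python
import hashlib
import os
import time

def newest_mtime(root: str) -> float:
    """Return the most recent modification time under root (0 if empty)."""
    latest = 0.0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            latest = max(latest, os.path.getmtime(os.path.join(dirpath, name)))
    return latest

def fire_drill(source: str, backup: str, max_age_days: float = 7.0) -> list[str]:
    """Report problems: a stale backup, missing files, or differing content."""
    problems = []
    age = (time.time() - newest_mtime(backup)) / 86400
    if age > max_age_days:
        problems.append(f"backup is {age:.0f} days old - is the sync job stuck?")
    for dirpath, _, filenames in os.walk(source):
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(backup, os.path.relpath(src, source))
            if not os.path.exists(dst):
                problems.append(f"missing from backup: {src}")
            elif (hashlib.sha256(open(src, "rb").read()).digest()
                  != hashlib.sha256(open(dst, "rb").read()).digest()):
                problems.append(f"content differs: {src}")
    return problems
```

An empty problem list is the "drill passed" signal; anything else is exactly the kind of silently stuck job described above.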
        
         | philips wrote:
         | Yeah the fire drill to make sure everything works is crazy
         | important.
         | 
         | I just wish that storing things like keys on paper was easier.
         | 
         | At CoreOS we put some keys on printed QR codes and scanned them
         | with an airgapped laptop every 90 days to confirm the keys were
         | safe.
        
           | denton-scratch wrote:
           | > Yeah the fire drill
           | 
           | I agree. But I've never been allowed to run a fire drill.
           | Rebuilding a network from bare tin is obviously expensive,
           | but not as expensive as losing the business.
           | 
           | And then there's the sheer stress of being responsible for
           | the backups, but not being able to test bare-metal recovery.
           | 
           | I'm interested in backup ("what kind of weirdo is this!"),
           | but that wasn't a fun responsibility.
        
         | sigotirandolas wrote:
         | I prefer the "1 rule" for some data... the password is only in
         | my mind, so when I die, it's hopefully gone forever :)
        
           | ghaff wrote:
           | While I have some passwords that I don't _think_ I'd ever
           | forget, I'm not sure I'd want to bet on it given that I'm
           | remembering more than one password.
        
         | xtracto wrote:
         | You could use Shamir's Secret Sharing to split the secret into
         | N unrelated strings and then "recover" it with any k of them
         | (e.g. generate 5 shares and require 2 to recover). That way you
         | can hand out some of those shares but nobody will have the
         | target password.
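A toy sketch of the scheme (textbook Shamir splitting over a prime field; for real secrets use a vetted tool such as `ssss` rather than hand-rolled crypto):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for short integer secrets

def split(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    """Split secret into n shares; any k of them can reconstruct it."""
    # A random degree-(k-1) polynomial whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Fewer than k shares reveal nothing about the secret, which is what lets you hand shares to friends without any one of them holding the passphrase.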
        
           | mike_d wrote:
           | If I am in a situation where my brain doesn't work well
           | enough to remember a backup passphrase, it isn't going to
           | work well enough to do silly hacker shit.
           | 
           | There are very clear directions to my friend for when it is
           | acceptable to access my stuff, and very clear consequences
           | for malice.
        
         | 123pie123 wrote:
         | A full test is absolutely needed - including any system or DB
         | restores - and the system needs to be fully functionally tested.
         | 
         | A quick backup verification is not enough - I learnt this the
         | hard way, discovering I had lost 9 months of data only when I
         | did the full annual DR test. The backups were set up and
         | configured by the (very well known) manufacturer of the backup
         | software. They screwed up on the DNS name.
        
       | throw7 wrote:
       | I think actually physically going through your disaster recovery
       | plan (whatever that would be, 321 or not) reveals how good/bad
       | your backup plan is for you.
       | 
       | Personally, I downsized. I made peace with myself that I can live
       | without the gobsmacking amount of data I have if it were to be
       | lost tomorrow. I pared down to a small set of data (less than
       | 1GB) that is critical to me. That data is synced to various
       | devices with syncthing (it even includes my cellphone!), and then
       | I use restic to back it up to two different cloud storage
       | providers. When I'm bored I do an independent, standalone export
       | from cloud storage.
        
         | dharmab wrote:
         | I did the same with two tiers- a first tier of the data I would
         | need for a true disaster (a few GB), and a second tier of data
         | that is meaningful and difficult to replace (small enough to
         | fit on a flash drive). Everything else is dust in the wind.
        
         | slownews45 wrote:
         | Same here.
         | 
         | Since I "restore" this backup pretty frequently just for day to
         | day living (ie, doing taxes) I'm also pretty sure it's
         | accessible.
         | 
         | I do pay for versioning for the online sync of this, and I do a
         | periodic S3 Object Lock copy (30 days). For me, that's good
         | enough.
        
       | FullyFunctional wrote:
       | The different media types are probably the hardest issue; I'd
       | wager that for most people, backing up > 2 TB on anything but
       | spinning rust will be impractical/prohibitively expensive.
       | 
       | A slight twist on this: I now have data old enough that accessing
       | it with modern computers is starting to become a challenge.
       | Thankfully I migrated my once-enormous collection of 50,000 MB on
       | tape to just files on my file server, but I'm worried about the
       | longevity of optical media, and now I nervously glance at my
       | collection of even older media ....
        
         | xtracto wrote:
         | And the most important part of the rule, which they forgot:
         | test your backups.
         | 
         | I once had to restore a DB from backups, only to discover that
         | the backups from 2 days ago were corrupted. Fortunately the
         | backup from t-3 days worked, and I had to do some binlog
         | mongering to recover the rest of the data.
         | 
         | Test your backups, people. They WILL fail.
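For a small database the drill can be sketched with SQLite (the `notes` table is a made-up example; a MySQL restore plus binlog replay is of course more involved):

```python
import sqlite3

def backup_db(src_path: str, dst_path: str) -> None:
    """Copy a live SQLite database into a backup file."""
    with sqlite3.connect(src_path) as src, sqlite3.connect(dst_path) as dst:
        src.backup(dst)

def verify_restore(src_path: str, backup_path: str, table: str) -> bool:
    """Restore drill: confirm the backup actually contains the same rows.

    `table` is assumed to be a trusted name (it is interpolated into SQL).
    """
    with sqlite3.connect(src_path) as src, sqlite3.connect(backup_path) as bak:
        q = f"SELECT * FROM {table} ORDER BY 1"
        return src.execute(q).fetchall() == bak.execute(q).fetchall()
```

The point of the drill is the `verify_restore` half: a backup file that exists but fails this check is exactly the corrupted t-2 backup described above.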
        
           | ws66 wrote:
           | From someone smarter than me: people think they need backups
           | while in fact they need restores...
        
             | I_complete_me wrote:
             | Beautifully succinct.
        
         | Someone1234 wrote:
         | > I'd wager that for most people, backing up > 2 TB on anything
         | but spinning rust will be impractical/prohibitively expensive.
         | 
         | You'd lose that bet. 2 TB on LTO Ultrium tape costs under $10
         | (sometimes under $5 depending on volume of tapes ordered).
        
           | Karunamon wrote:
           | Lots of people here focusing on the cost of the drive without
           | factoring in:
           | 
           | - For home use, it is likely a once-or-twice-in-a-lifetime
           | purchase (try saying that about any other kind of media)
           | 
           | - You don't have to buy a brand new drive of the latest
           | generation at sticker price; used last gen(s) gear from the
           | enterprise works just as well.
           | 
            | - Perhaps most importantly, _what's your data worth?_ I
           | don't know about you guys, but I've got photos and documents
           | that are irreplaceable.
           | 
           | Tapes beat all other storage media on $/GB, reliability over
           | time, and arguably durability, all of which are the most
           | important factors for offline backup.
           | 
           | $500 comes out in the wash over a decade or two. That's the
           | kind of time scales we're talking about. Yes, it's not cheap
           | and easy consumer electronics you can buy off the shelf at
           | Walmart, but it's not unreasonable either.
        
           | Denvercoder9 wrote:
           | And how much for the tape machine?
        
             | Someone1234 wrote:
             | The person above was talking about the cost of HDDs, they
             | didn't include the cost of a machine or NAS to run them in
             | so why should I?
             | 
             | Seems like a double standard wherein people are going to
             | _pretend_ housing multiple HDDs costs $0 (unrealistic) but
              | won't even evaluate spending money on a tape drive.
              | 
              | You know what doesn't get cryptolockered? Yesterday's tape
              | sitting on the shelf.
        
               | Denvercoder9 wrote:
               | Because we're talking about most people, and most people
               | already have a machine into which they can plug a HDD.
               | Often that's the very machine with the documents they
               | want to backup.
        
               | wongarsu wrote:
               | I think "plugs into SATA or USB" is a reasonable standard
               | for a backup storage medium.
        
               | Someone1234 wrote:
               | Whereas I think that isn't a backup at all, because it is
               | plugged into the same machine (i.e. online backup) and
               | therefore going to get crypto-lockered.
        
               | Denvercoder9 wrote:
               | Neither SATA nor USB implies permanence.
        
           | FullyFunctional wrote:
           | And how much is the tape drive? For _most_ people (not data
            | centers) that's prohibitively expensive. A 2 TB hard disk is
            | $48 from Newegg (and I'm sure you can find cheaper).
        
           | wizzwizz4 wrote:
           | Most people do not have a $1000 LTO tape drive.
        
           | wongarsu wrote:
           | With the slight downside that you won't be able to acquire a
           | drive for less than $800.
           | 
           | LTO's combination of cheap media and expensive drives is
           | great for people with rooms full of tapes, but it makes it
           | pretty unattractive for everything else.
        
         | FullyFunctional wrote:
         | Oh, I forgot to add that I would _never_ trust flash memory
         | (e.g. SSDs) to keep data for more than ~ a year without being
         | powered on. It's an absolutely terrible archival format.
         | (Powered on with scrubbing, you'd at least know when it's
         | starting to degrade.)
        
         | ComputerGuru wrote:
         | There are Blu-ray Discs specifically engineered for long-term
         | archiving that have BER guarantees (anomalous bits read per GB
         | of data stored per year archived, or something).
        
       | zinekeller wrote:
       | Oh bother, it's down :( _(edit: it might not be any longer)_
       | 
       | Archive links:
       | 
       | https://web.archive.org/web/20211001064106/https://www.vmwar...
       | 
       | https://archive.md/1jHmP
        
         | throwaway2331 wrote:
         | TL;DR 3-2-1:
         | 
         | 3 copies of your data
         | 
         | 2 different media types
         | 
         | 1 off-site
        
           | tacone wrote:
           | Looks like they have backups :)
        
       | H8crilA wrote:
       | What's the cloud solution (at major providers) that automatically
       | geographically replicates data? For example, S3 buckets are tied
       | to a region, which cannot be considered reliable - a single DC
       | can always burn down, or even just be intermittently unreachable.
       | I'm looking for something that accepts an "upload" command that
       | will (eventually) replicate the data to several regions. Ideally
       | the regions can be changed at any point later, too.
       | 
       | This would take care of 3 and 1, sadly not 2 but 2 is pretty
       | hard.
        
         | ComputerGuru wrote:
         | S3 offers multi-region buckets that automatically read/write
         | from one of n buckets in n regions. Cross-region replication
         | rules give you eventual consistency (without conflict
         | resolution) across all buckets, which is good enough for backup
         | systems with unique file names/paths.
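A hedged sketch of setting that up with boto3 (the bucket names and IAM role ARN are placeholders, and both buckets must already have versioning enabled):

```python
def replication_config(dest_bucket_arn: str, role_arn: str) -> dict:
    """Build an S3 cross-region replication rule that copies every object."""
    return {
        "Role": role_arn,
        "Rules": [{
            "ID": "backup-crr",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = replicate all keys
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": dest_bucket_arn},
        }],
    }

def enable_replication(src_bucket: str, dest_bucket_arn: str, role_arn: str) -> None:
    """Apply the rule to the source bucket (needs AWS credentials set up)."""
    import boto3  # imported lazily so the config builder is testable offline
    boto3.client("s3").put_bucket_replication(
        Bucket=src_bucket,
        ReplicationConfiguration=replication_config(dest_bucket_arn, role_arn),
    )
```

Replication is asynchronous and eventually consistent, which matches the "upload once, let it fan out" workflow asked for above.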
        
         | gnur wrote:
         | S3 is region based, not AZ based. So it should be able to
         | tolerate an entire AZ (=DC) being lost.
         | 
         | They offer an amazing number of 9s for durability; it is very
         | unlikely that a single fire will cause harm to your data.
        
           | H8crilA wrote:
           | Yeah, OK. But apart from the reliability question - what's a
           | one stop shop for uploading files to Europe and the US in one
           | go?
        
           | Heliosmaster wrote:
           | I can't find the article anymore, but a few weeks ago here on
           | HN there was a piece where somebody did a few calculations on
           | how AWS US-East-1 (the most used AWS region in the world)
           | works, and came to the conclusion that all the things that
           | could make that region go BOOM are likely to have major
           | consequences for our civilization.
           | 
           | In short: AWS will go down only when you won't care about it
           | anyway
        
       | jimmySixDOF wrote:
       | Funny Story.
       | 
       | My first service order, way back when, was to an engineering
       | office that was in a panic because their primary CAD file server
       | had failed right in the middle of a deadline submittal.
       | 
       | I got there when they had just pulled out an identical file
       | server from some other department. They had 'everything' backed
       | up nightly onto a Zip drive. Nice, I thought. So, I plug in the
       | Zip drive and can see all these image files created by their
       | backup software. I ask for the backup app install files and they
       | say it's kept in a directory -- _on the failed server_!!!
       | 
       | I should mention this was not in the USA and resulted in a 6+hr
       | long international 42kbps zmodem download session from some
       | random BBS server of the best guess software product and version.
       | 
       | I still have one of the failed HDDs from that server as a paper
       | weight on my desk.
       | 
       | PS. We got it working in time, and the obvious moral of the story
       | is to always test your backup systems (and that goes for _both_
       | failover _and fallback_).
        
       | oblib wrote:
       | I've added offline/local-first support to my web app so users can
       | keep a copy and backups of all of their data on site and use the
       | app offline, but none of them want to do anything at all to
       | implement it, even though I've made it very easy.
       | 
       | They really don't seem to give it any thought at all, even when I
       | explain to them why they need to do this.
       | 
       | Conversely, when they're having network connection issues they
       | don't hesitate to call me, and sometimes in a panic.
       | 
       | I'm going to keep pushing them to get on it on their end though.
        
       | denton-scratch wrote:
       | As far as I can see, this scheme gives you one backup copied to
       | three places, on two different media.
       | 
       | What if the source for the backup was already corrupt or broken
       | in some way? If you only have one backup, then your backup is
       | corrupt too.
       | 
       | I was taught grandfather-father-son back in the 80s; still three
       | levels of backup, but they're different generations. That fitted
       | the kinds of media available then, but it doesn't really map to
       | modern equipment. I've struggled to work out a backup scheme that
       | is equally adaptable to the needs of a small business, a home
       | network or an individual.
       | 
       | Ironically, it's hardest for the individual; a modern business is
       | finished if it loses all its data. For an individual (or even a
       | hobby network), total data-loss is painful, but not usually an
       | existential risk. So it's harder to justify keeping everything in
       | triplicate.
        
         | gsich wrote:
         | For source integrity there are other methods available.
         | Checksums, parity, RAID. I don't see this as a backup problem.
        
           | denton-scratch wrote:
           | Those are checks of binary integrity. They can't confirm that
           | data hasn't been deliberately or accidentally replaced, added
           | or deleted.
           | 
           | I would like to have a grandfather, a week old; a father, a
           | day old; and a son, being last night's backup. All on
            | different media, and ideally not connected to the source
           | machine.
        
             | gsich wrote:
             | This is not a backup problem.
        
       | kunagi7 wrote:
       | I learned this the hard way a few years ago.
       | 
       | Having several drives is not enough. I used to keep my important
       | data replicated on 3 drives, from different brands, different
       | capacities, 1 internal, 2 external.
       | 
       | One day the internal drive failed; the next day one of the
       | external drives also failed. So... I panicked, shut everything
       | down and bought a brand new hard drive (1 TB). While copying from
       | the third drive, it also stopped working. So, I had a triple
       | drive failure. I managed to recover most of my data by freezing
       | the external drives and copying from them (until they heated up
       | and I had to freeze them again).
        
         | H8crilA wrote:
         | What are the odds? Did you maybe bring home some object with
         | radioactive material inside?
        
           | ComputerGuru wrote:
           | _This may or may not be the case here, but generally
           | speaking..._
           | 
           | Pretty high for any parity-based multi-disk system. The
           | remaining disks get stress-tested when a disk fails and you
           | need to copy everything over to the replacement. It's why
           | RAID5 is no longer sufficient with today's disk sizes, and
           | why a RAID10 (which survives losing any one disk, plus a
           | second only if it isn't the first one's mirror partner) is
           | actually real-world safer than a RAID6 (which can lose _any_
           | two disks).
        
           | kunagi7 wrote:
           | Who knows? This happened a decade ago. Maybe it was an
           | electrical issue... But wouldn't that fry the drive's
           | controller board instead of making the hard drives click?
        
         | lloeki wrote:
         | In 2005 on a Saturday evening, I lost every bit of digital data
         | I had, in spite of following (and even exceeding) this rule. I
         | had:
         | 
         | - One live copy of data on my laptop
         | 
         | - One copy on external hard disk, updated ~daily, on-site
         | (home)
         | 
         | - One copy on external hard disk, different brand and age,
         | updated ~bi-weekly, off-site (work)
         | 
         | - Immutable copies on optical media, persisted ~once a month,
         | on-site
         | 
         | Data on the laptop was lost due to operational error: I fat-
         | fingered a command and destroyed the partition table and part
         | of the leading data on disk. Being a reasonably fast disk this
         | ate a lot of structurally critical data quickly. Recovering the
         | filesystem would be really hard, but I had a two days old
         | backup, so didn't think much of it.
         | 
         | Now, to the local backup. I booted up a live CD, rebuilt a
         | partition layout, plugged the disk in, and started restoring
         | data. Reboot, and it seemed to work, mostly, but some things
         | that should have worked did not. I immediately jumped to look
         | at the recovered data: it was severely corrupted. I diffed some
         | files against the "originals" (i.e. from the backup) and they
         | were identical: data on the backup disk was hopelessly mangled
         | even though the hardware was fine. A cursory analysis seemed to
         | point to a software bug (filesystem code? drive firmware?
         | whatever - the issue had some logical consistency to it that
         | made it obvious it was unrecoverable, and determining that was
         | my sole goal at this point). And I had _just_ restored it over
         | what remained of perfectly valid - if not easily reachable -
         | data, essentially scrubbing the laptop disk. Sweat was starting
         | to build up.
         | 
         | Okay, the optical copies were next, even if older. Surely this
         | would get my heart rate down. I put the disc in, closed the
         | tray, and heard the sound of a rattling helicopter. I had
         | stored the disc in a closet which I thought would be safe, but
         | it turns out the hot water pipes for the flat above were
         | running behind a thin wall, which built up enough heat over
         | time inside the closet to slowly warp the disc. Well, one of
         | them, because I was paranoid enough to have three discs for a 3
         | month rotation; but while the other discs were geometrically
         | fine (maybe due to being a different brand), they were stored
         | for a longer amount of time and their data suffered much
         | bitrot. This was going to be a long Sunday.
         | 
         | Back at work on Monday, the final disk emitted an ominous
         | clicking noise right away. Shortly after, it snapped,
         | never to power on again. I could maybe recover data straight
         | from the platters if I sent it to some firm for a hefty pile of
         | cash, which I had none at hand, neither at that time nor in the
         | foreseeable future.
         | 
         | So, in order I experienced: an operational error, a logical
         | error, an environmental issue, and a hardware fault. Luck had
         | it that I had a second computer temporarily lent to me, which I
         | toyed with and where some of my most recent work files turned
         | out to lie from a week before, so I could resume putting food
         | on the table quickly. No amount of hackery was able to restore
         | any meaningful data, so I lost about 10 years of digital photos
         | and older work archives.
         | 
         | Psychologically it was fairly interesting, because I thought I
         | would be enraged at myself for multiple reasons, but the
         | perspective of such ridiculous odds of this happening turned
         | the whole thing into a very contemplative experience shortly
         | after.
        
           | denton-scratch wrote:
           | This evokes feelings!
           | 
           | I've never had a total catastrophe, but I have had a chain of
           | independently-unlikely faults that combined together to
           | create a once-in-a-lifetime disaster.
           | 
           | This has happened to me many times during my life.
        
       | scrooched_moose wrote:
       | Are there guidelines for how long a given type of media is
       | considered stable?
       | 
       | From experience, hard drives seem safe on the order of years;
       | I've spun drives back up from the early 2000s and they are fully
       | intact. The lifespan of burned optical media is/was counted in
       | minutes. Various flash memory is somewhere in between - I've had
       | all kinds of cheap flash drives die.
       | 
       | Even with the multiple media forms they need to be refreshed at
       | some frequency, and I don't know how often that should be.
        
         | ComputerGuru wrote:
         | We've come a long way since CD-R; I said this earlier but I
         | guess it's worth copy-and-pasting here:
         | 
         | > There are Blu-ray Discs specifically engineered for long-term
         | archiving that have BER guarantees (anomalous bits read per GB
         | of data stored per year archived, or something).
         | 
         | Now these are obviously simulated numbers (the tech isn't even
         | old enough to test) but it's a start: it means people are at
         | least considering the right questions.
         | 
         | I wouldn't write sensitive data to a Blu-ray directly unless it
         | was the kind of data where a bit-flip is not a huge deal (e.g.
         | a backup of users' profile images, where there are many small
         | files, a bit flip affects the content but doesn't compromise
         | the overall data, errors aren't cascading, etc). There's
         | already bit error correction baked into the analog <-> digital
         | transition layer, but it's not great - fortunately, efficient
         | bit error correction at the file level has been a thing since
         | before binaries on Usenet. Stick some PAR2 files on the
         | Blu-ray, or even serve your backup as a Blu-ray Disc plus a
         | DVD-R stuffed to the brim with PAR2 data for the former (I
         | prefer the first approach).
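The idea behind PAR2 can be illustrated with a toy single-block XOR parity (real Parchive uses Reed-Solomon coding and can repair many missing blocks, not just one):

```python
from functools import reduce

def make_parity(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together into one parity block."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def recover_block(surviving: list[bytes], parity: bytes) -> bytes:
    """Any single missing block is the XOR of the parity and the survivors."""
    return make_parity(surviving + [parity])
```

Because XOR is its own inverse, one extra parity block lets you rebuild any one lost data block; PAR2 generalizes this so k recovery blocks can repair any k damaged ones.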
        
         | gregsadetsky wrote:
         | I see a lot of results on google for "data storage lifespan"
         | 
         | Trying to find an authoritative source (loc.gov, archive.org,
         | etc.), I found this, which is not a full answer but gets into
         | interesting details: "Table 2 - the relative stability of
         | optical disc formats" [0] -- from >100 years down to ... 5-10!
         | 
         | [0] https://www.canada.ca/en/conservation-
         | institute/services/con...
        
       | blakesterz wrote:
       | This is from 2019, not that the advice ever goes out of date.
       | 
       | It's crazy how the seemingly easiest, most basic security/backup
       | advice is so easy to give and so hard to actually follow. 3-2-1:
       | so easy to teach and remember! In reality, at any kind of scale,
       | not so easy to do.
       | 
       | I am constantly reminded just how hard every aspect of security
       | really is to do. Even for the little/basic stuff.
        
         | philips wrote:
         | For local documents I find it way easier than cloud stuff.
         | 
         | 1. Local disk
         | 
         | 2. Time machine ssd
         | 
         | 3. Backblaze backup agent
         | 
         | For cloud stuff the services make it so hard. I have been
         | working on a service off and on to backup Google photos to an
         | SD card and then mail it to folks. And the amount of
         | limitations, rate limiting, and random errors out of Google's
         | API can be frustrating.
        
           | ghaff wrote:
           | And you can basically just as easily back up to two Time
           | Machine drives for just a little extra peace of mind. Not
           | sure it makes sense to use an SSD for that application
           | though.
        
           | wizzwizz4 wrote:
           | Try Google Takeout.
        
             | philips wrote:
             | It isn't automated. And in the case of the thing I am
             | building I can't help other people because you can't
             | delegate the creation of a takeout either.
        
               | wizzwizz4 wrote:
                | > _you can't delegate the creation of a takeout either._
               | 
               | Not through Google's APIs. But you can, through
               | adversarial interoperability.
        
         | deckard1 wrote:
         | The difficulty of 3-2-1 increases as your data size goes up.
         | 
         | For me, the only sane thing to do is partitioning.
         | 
         | My first group is data that would cause great pain if I were
         | to lose it. I keep the size of this group as small as possible
         | - a few hundred megs _or less_. You have a live copy, a backup
         | on a USB thumb stick or drive, and a copy you email someone or
         | snail-mail on a USB drive. It's simple to deal with. It has to
         | be, because it's critical.
         | 
         | My second group is data that is important but not a serious
         | threat to me. Photos and videos, mostly. This second group is
         | where the headache starts and logistics, cost, and time become
         | an issue. Offsite backup is either running to the bank deposit
         | box (time consuming), or upload to cloud (also time consuming,
         | and expensive). Containing the bit rot becomes a futile
         | exercise. Especially considering most people aren't running ECC
         | RAM and end-to-end ZFS with redundancy (for recovery) requires
         | significant expertise and time. Parchive files are the best bet
         | for most people.
         | 
         | Finally, my third group. I have a NAS with a simple mirrored
         | ZFS setup. Two huge drives. I'll probably add a 3rd drive for
         | added redundancy. There is no backup. This data I don't care
         | that much about. I'd hate to lose it. But I'd hate backing it
         | up much more. I don't live for my data, my data lives for me.
         | 
         | You have to ruthlessly prune data that you care to keep. Just
         | like the burden of owning a boat or an overly large house,
         | there is a burden to too much data you care about. The mistake
         | a lot of people make is treating all data the same. Then they
         | end up with terabytes of data of unequal importance and get
         | sloppy protecting the tiny amount of data that truly matters.
        
       | throwaway2331 wrote:
       | Recently learned this lesson.
       | 
       | Had 2 backups (1 SD card, 1 HDD) for my "Document" folder.
       | 
       | I was trying to replace my GRUB MBR bootloader with rEFInd's EFI
       | one so I could dual-boot on a new laptop (and swap in my old
       | one's SSD without having to reinstall the whole system: Arch).
       | Unfortunately, the boot partition was too small and needed to be
       | resized from 512MB to 1GB. Foolishly, and in a rush, I thought
       | using gparted to change the partition boundaries (shrink the
       | root partition by 512MB from the beginning, and stretch the boot
       | partition to 1GB from the end) was the answer.
       | 
       | I completely forgot that EXT4 has a superblock at the beginning,
       | so now it was gone, and the root partition was completely
       | unmountable -- and fsck was of no use.
       | 
       | So I scramble to find my backups (to decide whether or not I
       | should figure out how to fix this), and realize that tiny little
       | SD card was missing, and my HDD backup was completely
       | unmountable.
       | 
       | Truly, a major fuck-up.
       | 
       | Thankfully, I didn't write anything to the boot partition, so
       | throwing a Hail Mary and simply resizing the partitions back to
       | their exact original sizes (thankfully x2 my TTS history was
       | useful) allowed the root drive to mount without a hitch.
       | 
       | I was close to losing all of my KeePassXC passwords and private
       | keys due to sheer idiocy.
       | 
       | In the end, I set up "cloud" backups (second storage media type,
       | and long and far away), switched to Debian, and continued on my
       | merry way.
        
         | travisby wrote:
         | > so throwing a Hail Mary and simply resizing the partitions
         | back to their exact original sizes
         | 
         | I've made similar mistakes (well, the same effect, but for
         | different reasons. I thought I was over a remote session when I
         | was local and made an entire new partition layout).
         | 
         | `testdisk` was able to introspect what partition boundaries
         | _were_ and rebuild the partition table.
         | 
         | If you're ever in a similar situation and either don't remember
         | the exact boundaries or don't trust yourself to recreate them,
         | check it out!
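
          For concreteness, a hedged sketch of that recovery path
          (device names are placeholders, and this is illustrative only,
          not something to run against a live disk). testdisk is
          interactive, so its menu steps appear as comments; the e2fsck
          step applies only if the primary ext4 superblock itself is
          damaged, since ext4 keeps backup superblocks at known offsets:

```shell
# CAUTION: /dev/sdX is a placeholder -- work on a copy/image if possible.

# 1) Let testdisk scan for the old partition boundaries and rewrite
#    the partition table:
sudo testdisk /dev/sdX
#    -> Analyse -> Quick Search (or Deeper Search) -> Write

# 2) If the partition is back but the primary ext4 superblock is
#    damaged, list the backup superblock locations...
sudo dumpe2fs /dev/sdX2 | grep -i superblock

# ...and repair from one of them (32768 is a common location for
# filesystems with 4k blocks):
sudo e2fsck -b 32768 /dev/sdX2
```

This only works while nothing new has been written over the old data,
which is why the commenter's "don't touch anything" instinct paid off.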
        
         | zamadatix wrote:
         | To be fair, I think the fuck-up was on gparted's part; it's
         | supposed to handle this kind of thing for you during the
         | resize. That being said, no tool is ever bug-free, hence
         | backups :).
        
         | [deleted]
        
       | CGamesPlay wrote:
       | I never really liked the 3-2-1 rule because it feels too
       | specific: while it works, simpler solutions also provide the same
       | level of reliability.
       | 
       | I think about backups in terms of blast radius. 1) The local
       | machine has the working copy of data and a local backup as
       | permitted by free space. The smallest blast radius where I lose
       | data is "my laptop hard drive fails". 2) My external drive has
       | another backup. The new blast radius is "my house burns down". 3)
       | I maintain a cloud backup. The new blast radius is "a catastrophe
       | on a global scale".
       | 
       | Any two of these backups can fail and the data is still
       | salvageable.
        
         | Someone1234 wrote:
         | > while it works, simpler solutions also provide the same level
         | of reliability.
         | 
         | It exists and is important because many backup strategies are
         | broken and people don't realize it.
         | 
         | For example, your own strategy treats a PC's local storage
         | and an external drive as distinct backups, when in reality
         | you only evaluated hardware failures when formulating it, not
         | malicious actors. 3-2-1, in particular the media-type and
         | off-site requirements, tries to "trick" you into having a
         | backup that isn't accessible from the same system it is
         | backing up (i.e. offline backups).
         | 
         | Your backup strategy has been used almost verbatim by
         | multiple institutions that got crypto-locked. The external
         | drive was hit, and then the cloud backup service happily
         | synced the now-encrypted files. They went from "backed up in
         | three places" to backed up in zero places, and are now
         | calling the cloud provider hoping that their backups had
         | unencrypted copies in them.
         | 
         | 3-2-1 isn't simple, but it is good, and that's what it tries to
         | be.
        
           | toastedwedge wrote:
           | Backup versioning is the important distinction here. A backup
           | on its own is only good against loss, but a backup with
           | multiple snapshots in varying states in place protects
           | against the threat of crypto-locked cloud synchronization.
           | 
           | Of course if the provider is locked then it's a moot point.
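
            As a toy illustration of why versioning matters (all names
            here are hypothetical, and a real backup tool would do far
            more), a backup target that keeps every prior version of a
            file survives a ransomware overwrite being synced over it:

```python
import tempfile
from pathlib import Path


class VersionedStore:
    """Toy versioned backup target: every write creates a new version,
    so an overwrite (e.g. ransomware-encrypted data synced to the
    cloud) never destroys the older, clean copies."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, name, data):
        # Each file gets a folder; each write appends a new version.
        folder = self.root / name
        folder.mkdir(exist_ok=True)
        version = len(list(folder.iterdir())) + 1
        (folder / f"v{version:06d}").write_bytes(data)

    def get(self, name, version=None):
        # Default: latest version; otherwise a specific older one.
        versions = sorted((self.root / name).iterdir())
        target = versions[-1] if version is None else versions[version - 1]
        return target.read_bytes()


store = VersionedStore(tempfile.mkdtemp())
store.put("vault.kdbx", b"good backup")
store.put("vault.kdbx", b"ENCRYPTED BY RANSOMWARE")  # malicious sync

# The latest version is trashed, but the clean snapshot is still there:
assert store.get("vault.kdbx") == b"ENCRYPTED BY RANSOMWARE"
assert store.get("vault.kdbx", version=1) == b"good backup"
```

            The caveats in the thread still apply: if the attacker can
            reach the versioning controls (or the provider itself is
            compromised), old versions can be purged too.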
        
             | Someone1234 wrote:
             | Some backup versioning schemes are disabled on purpose by
             | some cryto-malware.
        
           | CGamesPlay wrote:
           | What you're saying is that a critical piece of the 3-2-1
           | rule is a piece that isn't actually prescribed by the 3-2-1
           | rule: that you have "a backup which isn't accessible from
           | the same system that it is backing up (i.e. offline
           | backups)." Another reason I don't think this "rule of
           | thumb" is as useful as it purports to be.
           | 
           | Presumably the rule was invented before ransomware was a
           | thing, so perhaps it gets a pass for not anticipating
           | versions, but yeah: the rule of thumb for the modern world
           | probably includes something about backup versions.
        
           | gregsadetsky wrote:
           | In my DIY backup setups, I'm relying on rsync.net's snapshots
           | [0] and also the fact that the AWS S3 bucket I'm
           | automatically copying stuff to has 'versioning' enabled. [1]
           | 
           | Is that good enough in your point of view?
           | 
           | Thanks
           | 
           | [0] https://www.rsync.net/resources/howto/snapshots.html (not
           | affiliated, just a happy customer)
           | 
           | [1] https://docs.aws.amazon.com/AmazonS3/latest/userguide/Ver
           | sio... (ditto)
        
             | slownews45 wrote:
             | For automated ransomware this works.
             | 
             | For targeted attacks it often doesn't, because your AWS
             | keys are often on the system doing the backup and have
             | permission to delete items, etc.
             | 
             | Of course, this is why S3 lets you set an Object Lock
             | rule (i.e. 30 days is plenty) so that even you (or your
             | computer) can't go and delete those online backups.
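
              A hedged sketch of that setup with the AWS CLI (the
              bucket name is a placeholder, and Object Lock must be
              enabled at bucket-creation time; region and credential
              flags are omitted):

```shell
# Hypothetical bucket -- Object Lock can only be enabled on creation.
aws s3api create-bucket --bucket example-backups \
    --object-lock-enabled-for-bucket

# Default retention: for 30 days, no key -- not even the one doing the
# backups -- can delete or overwrite a locked object version.
# COMPLIANCE mode binds the root account too, unlike GOVERNANCE.
aws s3api put-object-lock-configuration --bucket example-backups \
    --object-lock-configuration \
    '{"ObjectLockEnabled": "Enabled",
      "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}}}'
```

              With this in place, a compromised backup host can still
              upload garbage versions, but it cannot destroy the clean
              versions written before the attack until retention
              expires.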
        
         | FullyFunctional wrote:
          | IMO it's a stupid rule, but it's not for us; it's for
          | people who don't even have this much.
         | 
         | I think there's an element that is as important as keeping a
         | backup safe against a state-wide wildfire and that's
         | AUTOMATION. If it isn't automated then chances are that your
         | backup is very old once you finally need it.
        
           | hackernudes wrote:
           | Also need monitoring then!
        
         | ska wrote:
         | > simpler solutions also provide the same level of reliability.
         | 
          | Far simpler solutions also exist that don't provide anything
          | like that reliability.
          | 
          | Things are more nuanced these days than when this "rule" was
          | first formulated, but I suspect it's still true that the
          | vast majority of people would be far better off following it
          | than whatever they are doing now. Doubly true for personal
          | use.
        
           | CGamesPlay wrote:
           | Yes, this is exactly my point. I think we need a "2.0"
           | version of this rule that better encapsulates what a good
           | backup strategy is.
        
       ___________________________________________________________________
       (page generated 2021-11-24 23:00 UTC)